Assessment of quantitative staging PET/computed tomography parameters using machine learning for early detection of progression in diffuse large B-cell lymphoma.

Aksu A, Us A, Küçüker KA, Solmaz Ş, Turgut B

PubMed | Jun 30, 2025
This study aimed to investigate the role of volumetric and dissemination parameters obtained from pretreatment fluorine-18 fluorodeoxyglucose PET/computed tomography (18F-FDG PET/CT) in predicting progression/relapse in patients with diffuse large B-cell lymphoma (DLBCL) using machine learning algorithms. Patients diagnosed with DLBCL histopathologically, treated with rituximab, cyclophosphamide, doxorubicin, vincristine, and prednisone, and followed for at least 1 year were reviewed retrospectively. Quantitative parameters such as tumor volume [total metabolic tumor volume (tMTV)], tumor burden [total lesion glycolysis (tTLG)], and the longest distance between two tumor foci (Dmax) were obtained from PET images with a standardized uptake value threshold of 4.0. The MTV of the single largest volume of interest was recorded as the metabolic bulk volume (MBV). Machine learning models were then trained on the patients' PET parameters and clinical information to predict progression/relapse within 1 year. Of the 90 patients included, 16 had progression within 1 year. Significant differences were found in tMTV, tTLG, MBV, and Dmax values between patients with and without progression. The area under the curve (AUC) of the model built on clinical data alone was 0.701. A random forest model using PET parameters reached an AUC of 0.871, while a naive Bayes model combining PET parameters with clinical data reached an AUC of 0.838. Using quantitative parameters derived from staging PET with machine learning algorithms may enable early detection of progression in patients with DLBCL, improve early risk stratification, and guide treatment decisions in these patients.
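
As a rough illustration of the modeling step described above, the sketch below trains a random forest (one of the algorithms the abstract names) on the four PET parameters and reports a cross-validated AUC. The data are hypothetical placeholders, not the study cohort.

```python
# Hypothetical sketch: random forest on the four PET parameters from the
# abstract (tMTV, tTLG, MBV, Dmax); values and labels below are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 90  # cohort size reported in the abstract
X = np.column_stack([
    rng.lognormal(4, 1, n),  # tMTV
    rng.lognormal(6, 1, n),  # tTLG
    rng.lognormal(3, 1, n),  # MBV
    rng.uniform(1, 80, n),   # Dmax (cm)
])
y = (rng.random(n) < 16 / 90).astype(int)  # ~16/90 progressed within 1 year

clf = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print("cross-validated AUC:", roc_auc_score(y, proba))
```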

A Deep Learning-Based De-Artifact Diffusion Model for Removing Motion Artifacts in Knee MRI.

Li Y, Gong T, Zhou Q, Wang H, Yan X, Xi Y, Shi Z, Deng W, Shi F, Wang Y

PubMed | Jun 30, 2025
Motion artifacts are common in knee MRI and usually lead to rescanning, so their effective removal would be clinically useful. The aim was to construct an effective deep learning-based model to remove motion artifacts from knee MRI using real-world data. Retrospective. Model construction: 90 consecutive patients (1997 2D slices) whose knee MRI images with motion artifacts were paired with immediately rescanned artifact-free images serving as ground truth. Internal test dataset: 25 patients (795 slices) from another period; external test dataset: 39 patients (813 slices) from another hospital. 3-T/1.5-T knee MRI with T1-weighted, T2-weighted, and proton density-weighted imaging. A deep learning-based supervised conditional diffusion model was constructed. Objective metrics (root mean square error [RMSE], peak signal-to-noise ratio [PSNR], structural similarity [SSIM]) and subjective ratings were used for image quality assessment and compared against three other algorithms (enhanced super-resolution [ESR], enhanced deep super-resolution, and ESR using a generative adversarial network). Diagnostic performance of the output images was compared with the rescanned images. Statistical tests included the kappa test, Pearson's chi-square test, Friedman's rank-sum test, and the marginal homogeneity test; p < 0.05 was considered statistically significant. Subjective ratings showed significant improvements in the output images compared to the input, with no significant difference from the ground truth. The constructed method demonstrated the smallest RMSE (11.44 ± 5.47 in the validation cohort; 13.95 ± 4.32 in the external test cohort), the largest PSNR (27.61 ± 3.20 in the validation cohort; 25.64 ± 2.67 in the external test cohort), and the largest SSIM (0.97 ± 0.04 in the validation cohort; 0.94 ± 0.04 in the external test cohort) compared with the other three algorithms. The output images achieved diagnostic capability comparable to the ground truth for multiple anatomical structures. The constructed model is feasible and effective and outperformed multiple other algorithms for removing motion artifacts in knee MRI. Evidence level: 3. Technical efficacy: Stage 2.
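
For readers wanting to reproduce the objective metrics, here is a minimal sketch computing RMSE, PSNR, and SSIM with scikit-image on hypothetical arrays; it is not the authors' evaluation code.

```python
# Minimal sketch of the reported image-quality metrics (RMSE, PSNR, SSIM),
# computed between a stand-in ground-truth slice and a noisy "model output".
import numpy as np
from skimage.metrics import (
    mean_squared_error, peak_signal_noise_ratio, structural_similarity
)

rng = np.random.default_rng(0)
ground_truth = rng.random((256, 256))  # rescanned artifact-free slice
output = np.clip(ground_truth + 0.05 * rng.standard_normal((256, 256)), 0, 1)

rmse = np.sqrt(mean_squared_error(ground_truth, output))
psnr = peak_signal_noise_ratio(ground_truth, output, data_range=1.0)
ssim = structural_similarity(ground_truth, output, data_range=1.0)
print(f"RMSE={rmse:.3f}  PSNR={psnr:.2f} dB  SSIM={ssim:.3f}")
```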

Enhancing weakly supervised data augmentation networks for thyroid nodule assessment using traditional and doppler ultrasound images.

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

PubMed | Jun 30, 2025
Thyroid ultrasound (US) is an essential tool for detecting and characterizing thyroid nodules. In this study, we propose an innovative approach to enhance thyroid nodule assessment by integrating Doppler US images with grayscale US images through weakly supervised data augmentation networks (WSDAN). Our method reduces background noise by replacing inefficient augmentation strategies, such as random cropping, with an advanced technique guided by bounding boxes derived from Doppler US images. This targeted augmentation significantly improves model performance in both classification and localization of thyroid nodules. The training dataset comprises 1288 paired grayscale and Doppler US images, with an additional 190 pairs used for three-fold cross-validation. To evaluate the model's efficacy, we tested it on a separate set of 190 grayscale US images. Compared to five state-of-the-art models and the original WSDAN, our Enhanced WSDAN model achieved superior performance. For classification, it reached an accuracy of 91%. For localization, it achieved Jaccard and Dice indices of 75% and 87%, respectively, demonstrating its potential as a valuable clinical tool.
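
The bounding-box-guided cropping idea can be sketched as follows; this is a hypothetical reconstruction (the function names and the color-based flow mask are my assumptions), not the authors' implementation.

```python
# Hypothetical sketch of Doppler-guided cropping: locate colored (flow)
# pixels in the Doppler image, take their bounding box, and crop the paired
# grayscale image there instead of cropping at random.
import numpy as np

def doppler_bbox(doppler_rgb, threshold=30):
    """Bounding box of pixels whose RGB channels disagree (color overlay)."""
    c = doppler_rgb.astype(int)
    mask = (c.max(axis=2) - c.min(axis=2)) > threshold
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None  # no flow signal detected
    return ys.min(), ys.max(), xs.min(), xs.max()

def guided_crop(gray, doppler_rgb, margin=16):
    box = doppler_bbox(doppler_rgb)
    if box is None:
        return gray  # fall back to the full image
    y0, y1, x0, x1 = box
    h, w = gray.shape
    return gray[max(0, y0 - margin):min(h, y1 + margin),
                max(0, x0 - margin):min(w, x1 + margin)]
```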

Towards 3D Semantic Image Synthesis for Medical Imaging

Wenwu Tang, Khaled Seyam, Bin Yang

arXiv preprint | Jun 30, 2025
In the medical domain, acquiring large datasets is challenging due to both accessibility issues and stringent privacy regulations. Consequently, data availability and privacy protection are major obstacles to applying machine learning in medical imaging. To address this, our study proposes Med-LSDM (Latent Semantic Diffusion Model), which operates directly in the 3D domain and leverages de-identified semantic maps to generate synthetic data for privacy preservation and data augmentation. Unlike many existing methods that focus on generating 2D slices, Med-LSDM is designed specifically for 3D semantic image synthesis, making it well suited for applications requiring full volumetric data. Med-LSDM incorporates a guiding mechanism that controls the 3D image generation process by applying a diffusion model within the latent space of a pre-trained VQ-GAN. By operating in the compressed latent space, the model significantly reduces computational complexity while still preserving critical 3D spatial details. Our approach demonstrates strong performance in 3D semantic medical image synthesis, achieving a 3D-FID score of 0.0054 on the conditional Duke Breast dataset and a Dice score (0.70964) similar to that of real images (0.71496). These results indicate that the synthetic data from our model have a small domain gap with real data and are useful for data augmentation.
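
A conceptual sketch of diffusion in a compressed latent space is shown below: a standard DDPM forward step applied to stand-in 3D latents. The VQ-GAN encoder and the conditional denoiser are referenced only in comments, since their details are not given in the abstract.

```python
# Conceptual sketch (assumptions, not the paper's code): one forward-diffusion
# step over 3D latents such as those a pretrained VQ-GAN encoder would produce.
import torch

def q_sample(z0, t, alphas_cumprod):
    """Standard DDPM forward step: z_t = sqrt(a_t) z_0 + sqrt(1 - a_t) eps."""
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1, 1)  # broadcast over 3D latents
    eps = torch.randn_like(z0)
    return a_t.sqrt() * z0 + (1 - a_t).sqrt() * eps, eps

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)

z0 = torch.randn(2, 4, 16, 16, 16)  # stand-in for VQ-GAN 3D latents
t = torch.randint(0, T, (2,))
z_t, eps = q_sample(z0, t, alphas_cumprod)
# Training would regress eps from (z_t, t, semantic_map) with a 3D denoiser.
```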

Predicting Crohn's Disease Activity Using Computed Tomography Enterography-Based Radiomics and Serum Markers.

Wang P, Liu Y, Wang Y

PubMed | Jun 30, 2025
Accurate stratification of Crohn's disease (CD) activity using computed tomography enterography (CTE) radiomics and serum markers can aid in predicting disease progression and assist physicians in personalizing therapeutic regimens for patients with CD. This retrospective study enrolled 233 patients diagnosed with CD between January 2019 and August 2024. Patients were divided into training and testing cohorts at a ratio of 7:3 and further categorized into remission, mild active, and moderate-severe active groups based on the Simple Endoscopic Score for Crohn's Disease (SES-CD). Radiomics features were extracted from venous-phase CTE images, and the t-test and least absolute shrinkage and selection operator (LASSO) regression were applied for feature selection. Serum markers were selected based on analysis of variance. We also developed a random forest (RF) model for multi-class stratification of CD activity. Model performance was evaluated by the area under the receiver operating characteristic curve (AUC), and the contribution of each feature to CD activity was quantified via SHapley Additive exPlanations (SHAP) values. Finally, gender, radiomics scores, and serum scores were combined in a nomogram model to verify the effectiveness of feature extraction. Fourteen radiomics features with non-zero coefficients and six serum markers with significant differences (P < 0.01) were ultimately selected to predict CD activity. The AUC (micro/macro) of the ensemble machine learning model combining radiomics features and serum markers was 0.931/0.928 for the three-class task. The AUCs for the remission, mild active, and moderate-severe active phases were 0.983, 0.852, and 0.917, respectively. The mean AUC of the nomogram model was 0.940. A model integrating radiomics and serum markers of CD patients achieved strong agreement with the SES-CD in grading CD activity and has the potential to assist clinicians in accurate diagnosis and treatment.
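
The selection-then-classify pipeline the abstract describes can be sketched as follows, using LASSO for feature selection and a random forest for three-class grading with micro/macro AUC; the data and planted signal are synthetic stand-ins, not the study's.

```python
# Hypothetical sketch: LASSO feature selection followed by a random forest
# for three-class CD activity grading; data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import label_binarize
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.standard_normal((233, 100))               # 100 candidate radiomics features
logits = X[:, 0] + 0.8 * X[:, 1] - 0.6 * X[:, 2]  # planted signal
y = np.digitize(logits, [-0.5, 0.5])              # 0=remission, 1=mild, 2=mod-severe

# LASSO (a crude stand-in here, fit on the class index) keeps non-zero features.
keep = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
X_tr, X_te, y_tr, y_te = train_test_split(
    X[:, keep], y, test_size=0.3, stratify=y, random_state=0)
proba = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict_proba(X_te)

y_bin = label_binarize(y_te, classes=[0, 1, 2])   # binarize for micro/macro AUC
print("micro AUC:", roc_auc_score(y_bin, proba, average="micro"))
print("macro AUC:", roc_auc_score(y_bin, proba, average="macro"))
```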

FD-DiT: Frequency Domain-Directed Diffusion Transformer for Low-Dose CT Reconstruction

Qiqing Liu, Guoquan Wei, Zekun Zhou, Yiyang Wen, Liu Shi, Qiegen Liu

arXiv preprint | Jun 30, 2025
Low-dose computed tomography (LDCT) reduces radiation exposure but suffers from image artifacts and loss of detail due to quantum and electronic noise, potentially impacting diagnostic accuracy. Transformers combined with diffusion models are a promising approach to image generation. Nevertheless, existing methods exhibit limitations in preserving fine-grained image details. To address this issue, a frequency domain-directed diffusion transformer (FD-DiT) is proposed for LDCT reconstruction. FD-DiT centers on a diffusion strategy that progressively introduces noise until the distribution statistically aligns with that of LDCT data, followed by denoising. Furthermore, we employ a frequency-decoupling technique to concentrate noise primarily in the high-frequency domain, thereby facilitating effective capture of essential anatomical structures and fine details. A hybrid denoising network is then utilized to optimize the overall data reconstruction process. To enhance the capability to recognize high-frequency noise, we incorporate sliding sparse local attention to leverage the sparsity and locality of shallow-layer information, propagating it via skip connections to improve feature representation. Finally, we propose a learnable dynamic fusion strategy for optimal component integration. Experimental results demonstrate that, at identical dose levels, LDCT images reconstructed by FD-DiT exhibit superior noise and artifact suppression compared with state-of-the-art methods.
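
The frequency-decoupling step can be illustrated with a simple FFT-based split of a slice into low- and high-frequency components; this is my illustration of the general idea, not the paper's implementation.

```python
# Illustrative frequency split: a centered circular FFT mask separates a
# CT slice into low- and high-frequency parts that sum back to the input.
import numpy as np

def frequency_split(img, radius=0.1):
    """Return (low, high) components via a low-pass FFT mask."""
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]   # vertical frequencies (cycles/sample)
    fx = np.fft.fftfreq(w)[None, :]   # horizontal frequencies
    low_mask = (fy ** 2 + fx ** 2) <= radius ** 2
    low = np.fft.ifft2(np.fft.fft2(img) * low_mask).real
    return low, img - low

img = np.random.rand(512, 512).astype(np.float32)  # stand-in LDCT slice
low, high = frequency_split(img)
assert np.allclose(low + high, img, atol=1e-5)  # exact decomposition
```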

Uncertainty-aware Diffusion and Reinforcement Learning for Joint Plane Localization and Anomaly Diagnosis in 3D Ultrasound

Yuhao Huang, Yueyue Xu, Haoran Dou, Jiaxiao Deng, Xin Yang, Hongyu Zheng, Dong Ni

arXiv preprint | Jun 30, 2025
Congenital uterine anomalies (CUAs) can lead to infertility, miscarriage, preterm birth, and an increased risk of pregnancy complications. Compared to traditional 2D ultrasound (US), 3D US can reconstruct the coronal plane, providing clear visualization of the uterine morphology for accurately assessing CUAs. In this paper, we propose an intelligent system for simultaneous automated plane localization and CUA diagnosis. Our highlights are: 1) we develop a denoising diffusion model with local (plane) and global (volume/text) guidance, using an adaptive weighting strategy to optimize attention allocation to different conditions; 2) we introduce a reinforcement learning-based framework with unsupervised rewards to extract the key slice summary from redundant sequences, fully integrating information across multiple planes to reduce learning difficulty; 3) we provide text-driven uncertainty modeling for coarse prediction, and leverage it to adjust the classification probability for overall performance improvement. Extensive experiments on a large 3D uterine US dataset show the efficacy of our method in terms of plane localization and CUA diagnosis. Code is available at https://github.com/yuhoo0302/CUA-US.
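
Point 3 (uncertainty-driven probability adjustment) admits a simple form like the one below; the paper's exact formula may differ, so treat this as a hypothetical sketch.

```python
# Hypothetical sketch: blend classifier probabilities with a coarse
# text-driven prediction, weighting the prior by its certainty.
import numpy as np

def adjust_probability(p_cls, p_coarse, uncertainty):
    """Convex blend; low uncertainty shifts weight toward the coarse prior."""
    w = 1.0 - np.clip(uncertainty, 0.0, 1.0)
    p = (1.0 - w) * p_cls + w * p_coarse
    return p / p.sum()  # renormalize over CUA classes

p_cls = np.array([0.55, 0.30, 0.15])     # fine classifier output
p_coarse = np.array([0.20, 0.70, 0.10])  # coarse text-driven prediction
print(adjust_probability(p_cls, p_coarse, uncertainty=0.25))
```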

Self-Supervised Multiview Xray Matching

Mohamad Dabboussi, Malo Huard, Yann Gousseau, Pietro Gori

arXiv preprint | Jun 30, 2025
Accurate interpretation of multi-view radiographs is crucial for diagnosing fractures, muscular injuries, and other anomalies. While significant advances have been made in AI-based analysis of single images, current methods often struggle to establish robust correspondences between different X-ray views, an essential capability for precise clinical evaluations. In this work, we present a novel self-supervised pipeline that eliminates the need for manual annotation by automatically generating a many-to-many correspondence matrix between synthetic X-ray views. This is achieved using digitally reconstructed radiographs (DRR), which are automatically derived from unannotated CT volumes. Our approach incorporates a transformer-based training phase to accurately predict correspondences across two or more X-ray views. Furthermore, we demonstrate that learning correspondences among synthetic X-ray views can be leveraged as a pretraining strategy to enhance automatic multi-view fracture detection on real data. Extensive evaluations on both synthetic and real X-ray datasets show that incorporating correspondences improves performance in multi-view fracture classification.
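
A minimal sketch of how ground-truth correspondences between two synthetic views can be generated from a shared 3D volume: project the same 3D points through two known camera matrices. The matrices below are hypothetical, not the paper's DRR geometry.

```python
# Sketch (my construction, not the authors' code): pair 2D projections of the
# same 3D points seen from two views to form ground-truth correspondences.
import numpy as np

def project(P, xyz):
    """Project 3D points with a 3x4 camera matrix P; return pixel coords."""
    pts = np.hstack([xyz, np.ones((len(xyz), 1))]) @ P.T
    return pts[:, :2] / pts[:, 2:3]

# Hypothetical camera matrices for two DRR views of one CT volume.
P1 = np.hstack([np.eye(3), np.array([[0.], [0.], [500.]])])
R = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])  # 90-degree rotation
P2 = np.hstack([R, np.array([[0.], [0.], [500.]])])

voxels = np.random.rand(10, 3) * 100  # sampled 3D landmark points
uv1, uv2 = project(P1, voxels), project(P2, voxels)
# Row i of uv1 and uv2 is one ground-truth correspondence across views.
```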

MedSAM-CA: A CNN-Augmented ViT with Attention-Enhanced Multi-Scale Fusion for Medical Image Segmentation

Peiting Tian, Xi Chen, Haixia Bi, Fan Li

arXiv preprint | Jun 30, 2025
Medical image segmentation plays a crucial role in clinical diagnosis and treatment planning, where accurate boundary delineation is essential for precise lesion localization, organ identification, and quantitative assessment. In recent years, deep learning-based methods have significantly advanced segmentation accuracy. However, two major challenges remain. First, the performance of these methods heavily relies on large-scale annotated datasets, which are often difficult to obtain in medical scenarios due to privacy concerns and high annotation costs. Second, clinically challenging scenarios, such as low contrast in certain imaging modalities and blurry lesion boundaries caused by malignancy, still pose obstacles to precise segmentation. To address these challenges, we propose MedSAM-CA, an architecture-level fine-tuning approach that mitigates reliance on extensive manual annotations by adapting the pretrained foundation model, Medical Segment Anything (MedSAM). MedSAM-CA introduces two key components: the Convolutional Attention-Enhanced Boundary Refinement Network (CBR-Net) and the Attention-Enhanced Feature Fusion Block (Atte-FFB). CBR-Net operates in parallel with the MedSAM encoder to recover boundary information potentially overlooked by long-range attention mechanisms, leveraging hierarchical convolutional processing. Atte-FFB, embedded in the MedSAM decoder, fuses multi-level fine-grained features from skip connections in CBR-Net with global representations upsampled within the decoder to enhance boundary delineation accuracy. Experiments on publicly available datasets covering dermoscopy, CT, and MRI imaging modalities validate the effectiveness of MedSAM-CA. On the dermoscopy dataset, MedSAM-CA achieves 94.43% Dice with only 2% of the full training data, reaching 97.25% of full-data training performance and demonstrating strong effectiveness in low-resource clinical settings.
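
An attention-weighted fusion block in the spirit of Atte-FFB might look like the sketch below; the layer choices and sizes are my assumptions, not the released architecture.

```python
# Sketch of an attention-gated fusion of a CNN boundary feature map with a
# ViT decoder feature map (hypothetical layers, not MedSAM-CA's code).
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Per-pixel gate decides how much of each branch to keep."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.out = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, cnn_feat, vit_feat):
        a = self.gate(torch.cat([cnn_feat, vit_feat], dim=1))  # (B, C, H, W)
        return self.out(a * cnn_feat + (1 - a) * vit_feat)

fuse = AttentionFusion(64)
x = fuse(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(x.shape)  # torch.Size([1, 64, 32, 32])
```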

Contrastive Learning with Diffusion Features for Weakly Supervised Medical Image Segmentation

Dewen Zeng, Xinrong Hu, Yu-Jen Chen, Yawen Wu, Xiaowei Xu, Yiyu Shi

arXiv preprint | Jun 30, 2025
Weakly supervised semantic segmentation (WSSS) methods using class labels often rely on class activation maps (CAMs) to localize objects. However, traditional CAM-based methods struggle with partial activations and imprecise object boundaries due to optimization discrepancies between classification and segmentation. Recently, the conditional diffusion model (CDM) has been used as an alternative for generating segmentation masks in WSSS, leveraging its strong image generation capabilities tailored to specific class distributions. By modifying or perturbing the condition during diffusion sampling, the related objects can be highlighted in the generated images. Yet, the saliency maps generated by CDMs are prone to noise from background alterations during reverse diffusion. To alleviate the problem, we introduce Contrastive Learning with Diffusion Features (CLDF), a novel method that uses contrastive learning to train a pixel decoder to map the diffusion features from a frozen CDM to a low-dimensional embedding space for segmentation. Specifically, we integrate gradient maps generated from the CDM's external classifier with CAMs to identify foreground and background pixels with fewer false positives/negatives for contrastive learning, enabling robust pixel embedding learning. Experimental results on four segmentation tasks from two public medical datasets demonstrate that our method significantly outperforms existing baselines.
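
The pixel-level contrastive objective can be sketched as a standard InfoNCE loss over foreground/background pixel embeddings; CLDF's exact loss may differ, so this is an illustrative formulation with assumed shapes and temperature.

```python
# Illustrative pixel-level InfoNCE loss: pull each foreground ("positive")
# pixel embedding toward the anchor and push background ("negative") pixel
# embeddings away.
import torch
import torch.nn.functional as F

def pixel_info_nce(anchor, positives, negatives, tau=0.1):
    """anchor: (D,), positives: (P, D), negatives: (N, D)."""
    anchor = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1) @ anchor / tau  # (P,) similarities
    neg = F.normalize(negatives, dim=1) @ anchor / tau  # (N,) similarities
    # per positive i: -log( e^{pos_i} / (e^{pos_i} + sum_j e^{neg_j}) )
    denom = torch.logaddexp(pos, torch.logsumexp(neg, dim=0))
    return (denom - pos).mean()

loss = pixel_info_nce(torch.randn(128), torch.randn(8, 128),
                      torch.randn(64, 128))
print(loss.item())
```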