Page 52 of 225 (2,247 results)

Limited-angle SPECT image reconstruction using deep image prior.

Hori K, Hashimoto F, Koyama K, Hashimoto T

PubMed · Jun 30, 2025
[Objective] In single-photon emission computed tomography (SPECT) image reconstruction, limited-angle conditions lead to a loss of frequency components, which distort the reconstructed tomographic image along directions corresponding to the non-collected projection angle range. Although conventional iterative image reconstruction methods have been used to improve the reconstructed images in limited-angle conditions, the image quality is still unsuitable for clinical use. We propose a limited-angle SPECT image reconstruction method that uses an end-to-end deep image prior (DIP) framework to improve reconstructed image quality.
[Approach] The proposed limited-angle SPECT image reconstruction is an end-to-end DIP framework which incorporates a forward projection model into the loss function to optimise the neural network. By also incorporating a binary mask that indicates whether each data point in the measured projection data has been collected, the proposed method restores the non-collected projection data and reconstructs a less distorted image.
[Main results] The proposed method was evaluated using 20 numerical phantoms and clinical patient data. In numerical simulations, the proposed method outperformed existing back-projection-based methods in terms of peak signal-to-noise ratio and structural similarity index measure. We analysed the reconstructed tomographic images in the frequency domain using an object-specific modulation transfer function, in simulations and on clinical patient data, to evaluate the response of the reconstruction method to different frequencies of the object. The proposed method significantly improved the response to almost all spatial frequencies, even in the non-collected projection angle range. The results demonstrate that the proposed method reconstructs a less distorted tomographic image.
[Significance] The proposed end-to-end DIP-based reconstruction method restores lost frequency components and mitigates image distortion under limited-angle conditions by incorporating a binary mask into the loss function.
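The mask-in-the-loss idea described above can be sketched compactly. This is a minimal, illustrative version: a toy column-sum "projector" stands in for the SPECT forward model, and none of the names are from the paper.

```python
import numpy as np

def masked_projection_loss(recon_image, forward_project, measured, mask):
    # Penalize disagreement only at projection bins that were actually collected
    residual = mask * (forward_project(recon_image) - measured)
    return float(np.sum(residual ** 2))

# Toy 1-view "projector": sum image columns into a 1D projection.
project = lambda img: img.sum(axis=0)
image = np.ones((4, 4))                    # candidate reconstruction
measured = np.array([4.0, 4.0, 9.0, 9.0])  # last two bins were never collected
mask = np.array([1.0, 1.0, 0.0, 0.0])      # 1 = collected, 0 = not collected
loss = masked_projection_loss(image, project, measured, mask)  # 0.0: mismatch falls outside the mask
```

With the mask, the junk values in the non-collected bins contribute nothing to the loss, so the network is free to restore those bins from the image prior alone.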

In-silico CT simulations of deep learning generated heterogeneous phantoms.

Salinas CS, Magudia K, Sangal A, Ren L, Segars PW

PubMed · Jun 30, 2025
Current virtual imaging phantoms primarily emphasize geometric accuracy of anatomical structures. However, to enhance realism, it is also important to incorporate intra-organ detail. Because biological tissues are heterogeneous in composition, virtual phantoms should reflect this by including realistic intra-organ texture and material variation.

We propose training two 3D Double U-Net conditional generative adversarial networks (3D DUC-GAN) to generate sixteen unique textures that encompass organs found within the torso. The model was trained on 378 CT image-segmentation pairs taken from a publicly available dataset, with 18 additional pairs reserved for testing. Textured phantoms were generated and imaged using DukeSim, a virtual CT simulation platform.

Results showed that the deep learning model was able to synthesize realistic heterogeneous phantoms from a set of homogeneous phantoms. These phantoms were compared with original CT scans and had a mean absolute difference of 46.15 ± 1.06 HU. The structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR) were 0.86 ± 0.004 and 28.62 ± 0.14, respectively. The maximum mean discrepancy between the generated and actual distribution was 0.0016. These metrics marked an improvement of 27%, 5.9%, 6.2%, and 28%, respectively, compared to current homogeneous texture methods. The generated phantoms that underwent a virtual CT scan had a closer visual resemblance to the true CT scan than those from the previous method.

The resulting heterogeneous phantoms offer a significant step toward more realistic in silico trials, enabling enhanced simulation of imaging procedures with greater fidelity to true anatomical variation.
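The reported MAD and PSNR metrics are straightforward to compute; here is a minimal sketch on a toy 2×2 HU patch (all values invented for illustration):

```python
import numpy as np

def mean_abs_diff_hu(a, b):
    # Mean absolute difference between two CT volumes, in HU
    return float(np.mean(np.abs(a - b)))

def psnr(a, b, data_range):
    # Peak signal-to-noise ratio in dB for the given intensity range
    mse = np.mean((a - b) ** 2)
    return float(10 * np.log10(data_range ** 2 / mse))

ref = np.array([[0.0, 100.0], [200.0, 300.0]])  # "true" CT patch (HU)
gen = ref + 10.0                                 # generated patch: uniform +10 HU bias
mad = mean_abs_diff_hu(gen, ref)                 # 10.0 HU
quality = psnr(gen, ref, data_range=300.0)       # ≈ 29.5 dB
```

Note that PSNR depends on the chosen `data_range`; CT comparisons should fix it consistently across methods.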

Precision and Personalization: How Large Language Models Redefining Diagnostic Accuracy in Personalized Medicine - A Systematic Literature Review.

Aththanagoda AKNL, Kulathilake KASH, Abdullah NA

PubMed · Jun 30, 2025
Personalized medicine aims to tailor medical treatments to the unique characteristics of each patient, but its effectiveness relies on diagnostic accuracy sufficient to capture individual variability in disease response and treatment efficacy. This systematic literature review explores the role of large language models (LLMs) in enhancing diagnostic precision and supporting the advancement of personalized medicine. A comprehensive search was conducted across Web of Science, Science Direct, Scopus, and IEEE Xplore, targeting peer-reviewed articles published in English between January 2020 and March 2025 that applied LLMs within personalized-medicine contexts. Following PRISMA guidelines, 39 relevant studies were selected and systematically analyzed. The findings indicate a growing integration of LLMs across key domains such as clinical informatics, medical imaging, patient-specific diagnosis, and clinical decision support. LLMs have shown potential in uncovering subtle data patterns critical for accurate diagnosis and personalized treatment planning. This review highlights the expanding role of LLMs in improving diagnostic accuracy in personalized medicine, offering insights into their performance, applications, and challenges, while acknowledging limitations in generalizability due to variable model performance and dataset biases. It also underscores the importance of addressing challenges related to data privacy, model interpretability, and reliability across diverse clinical scenarios. For successful clinical integration, future research must focus on refining LLM technologies, ensuring ethical standards, and continuously validating models to safeguard effective and responsible use in healthcare environments.

Thoracic staging of lung cancers by <sup>18</sup>FDG-PET/CT: impact of artificial intelligence on the detection of associated pulmonary nodules.

Trabelsi M, Romdhane H, Ben-Sellem D

PubMed · Jun 30, 2025
This study focuses on automating the classification of certain thoracic lung cancer stages in 3D <sup>18</sup>FDG-PET/CT images according to the 9th Edition of the TNM Classification for Lung Cancer (2024). By leveraging advanced segmentation and classification techniques, we aim to enhance the accuracy of distinguishing between T4 (pulmonary nodules) Thoracic M0 and M1a (pulmonary nodules) stages. Precise segmentation of pulmonary lobes using the Pulmonary Toolkit enables the identification of tumor locations and additional malignant nodules, ensuring reliable differentiation between ipsilateral and contralateral spread. A modified ResNet-50 model is employed to classify the segmented regions. The performance evaluation shows that the model achieves high accuracy. The unchanged class has the best recall (93%) and an excellent F1 score (91%). The M1a (pulmonary nodules) class also performs well, with an F1 score of 94%, though recall is slightly lower (91%). For T4 (pulmonary nodules) Thoracic M0, the model shows balanced performance with an F1 score of 87%. The overall accuracy is 87%, indicating a robust classification model.
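The per-class F1 scores quoted above are the harmonic mean of precision and recall. As an illustrative check (the precision value here is invented, not taken from the study), a class with recall 0.91 and precision around 0.97 lands near the reported F1 of 0.94:

```python
def f1_score(precision, recall):
    # Harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

f1 = f1_score(precision=0.97, recall=0.91)  # ≈ 0.94
```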

Enhanced abdominal multi-organ segmentation with 3D UNet and UNet++ deep neural networks utilizing the MONAI framework.

Tejashwini PS, Thriveni J, Venugopal KR

PubMed · Jun 30, 2025
Accurate segmentation of abdominal organs is a primary requirement for medical analysis and treatment planning. In this study, we propose an approach based on 3D UNet and UNet++ architectures implemented in the MONAI framework to address challenges arising from anatomical variability, complex organ shapes, and noise in CT/MRI scans. The models analyze volumetric data in three dimensions, make use of skip and dense connections, and optimize their parameters using Secretary Bird Optimization (SBO), which together improve feature extraction and boundary delineation across multi-organ tissue sets. The developed models' performance was evaluated on multiple datasets: Pancreas-CT, Liver-CT, and BTCV. On the Pancreas-CT dataset, 3D UNet achieved a DSC of 94.54%, while 3D UNet++ achieved a slightly higher DSC of 95.62%. Both models performed well on the Liver-CT dataset, with 3D UNet reaching a DSC of 95.67% and 3D UNet++ a DSC of 97.36%. On the BTCV dataset, both models had DSC values ranging from 93.42% to 95.31%. These results demonstrate the robustness and efficiency of the presented models for clinical applications and medical research in multi-organ segmentation. This study validates the proposed architectures, emphasizing accuracy in medical imaging and creating avenues for scalable solutions to complex abdominal-imaging tasks.
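The DSC figures reported above are the standard Dice overlap between predicted and ground-truth masks. A minimal version for binary masks (the arrays here are illustrative, not study data):

```python
import numpy as np

def dice_coefficient(pred, target):
    # DSC = 2|A ∩ B| / (|A| + |B|) for binary segmentation masks
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum())

pred   = np.array([1, 1, 1, 0, 0], dtype=bool)  # predicted organ mask
target = np.array([1, 1, 0, 0, 1], dtype=bool)  # ground-truth mask
dsc = dice_coefficient(pred, target)            # 2*2 / (3+3) ≈ 0.667
```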

Bidirectional Prototype-Guided Consistency Constraint for Semi-Supervised Fetal Ultrasound Image Segmentation.

Lyu C, Han K, Liu L, Chen J, Ma L, Pang Z, Liu Z

PubMed · Jun 30, 2025
Fetal ultrasound (US) image segmentation plays an important role in fetal development assessment, maternal pregnancy management, and intrauterine surgery planning. However, obtaining large-scale, accurately annotated fetal US imaging data is time-consuming and labor-intensive, posing challenges to the application of deep learning in this field. To address this challenge, we propose a semi-supervised fetal US image segmentation method based on bidirectional prototype-guided consistency constraint (BiPCC). BiPCC utilizes the prototype to bridge labeled and unlabeled data and establishes interaction between them. Specifically, the model generates pseudo-labels using prototypes from labeled data and then utilizes these pseudo-labels to generate pseudo-prototypes for segmenting the labeled data inversely, thereby achieving bidirectional consistency. Additionally, uncertainty-based cross-supervision is incorporated to provide additional supervision signals, thereby enhancing the quality of pseudo-labels. Extensive experiments on two fetal US datasets demonstrate that BiPCC outperforms state-of-the-art methods for semi-supervised fetal US segmentation. Furthermore, experimental results on two additional medical segmentation datasets exhibit BiPCC's outstanding generalization capability for diverse medical image segmentation tasks. Our proposed method offers a novel insight for semi-supervised fetal US image segmentation and holds promise for further advancing the development of intelligent healthcare.
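The prototype mechanism at the core of BiPCC can be sketched in its simplest form: compute one mean feature vector per class from labeled data, then pseudo-label unlabeled features by nearest prototype. This toy 2-D example shows only that basic step; the paper's bidirectional consistency and uncertainty-based cross-supervision are built on top of it.

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Prototype = mean feature vector of each class, from labeled data
    return np.stack([features[labels == c].mean(axis=0)
                     for c in range(num_classes)])

def pseudo_labels(features, prototypes):
    # Assign each unlabeled feature vector to its nearest prototype
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return d.argmin(axis=1)

feats  = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])  # labeled features
labels = np.array([0, 0, 1, 1])
protos = class_prototypes(feats, labels, num_classes=2)
unlabeled = np.array([[0.1, 0.0], [0.8, 0.9]])
pl = pseudo_labels(unlabeled, protos)  # nearest-prototype assignments
```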

Development and validation of a prognostic prediction model for lumbar-disc herniation based on machine learning and fusion of clinical text data and radiomic features.

Wang Z, Zhang H, Li Y, Zhang X, Liu J, Ren Z, Qin D, Zhao X

PubMed · Jun 30, 2025
Based on preoperative clinical text data and lumbar magnetic resonance imaging (MRI), we applied machine learning (ML) algorithms to construct a model that would predict early recurrence in lumbar-disc herniation (LDH) patients who underwent percutaneous endoscopic lumbar discectomy (PELD). We then explored the clinical performance of this prognostic prediction model via multimodal-data fusion. Clinical text data and radiological images of LDH patients who underwent PELD at the Intervertebral Disc Center of the Affiliated Hospital of Gansu University of Traditional Chinese Medicine (AHGUTCM; Lanzhou, China) were retrospectively collected. Two radiologists with clinical image-reading experience independently outlined regions of interest (ROI) on the MRI images and extracted radiomic features using 3D Slicer software. We then randomly separated the samples into a training set and a test set at a 7:3 ratio, used eight ML algorithms to construct predictive radiomic-feature models, evaluated model performance by the area under the curve (AUC), and selected the optimal model for screening radiomic features and calculating radiomic scores (Rad-scores). Finally, after using logistic regression to construct a nomogram for predicting the early-recurrence rate, we evaluated the nomogram's clinical applicability using a clinical-decision curve. We initially extracted 851 radiomic features. After constructing our models, we determined, based on AUC values, that the optimal ML algorithm was least absolute shrinkage and selection operator (LASSO) regression, which had an AUC of 0.76 and an accuracy rate of 91%. After screening features using the LASSO model, we calculated a Rad-score for each recurrent-LDH sample using nine radiomic features. Next, we fused three clinical features (age, diabetes, and heavy manual labor) with the Rad-scores to construct a nomogram with an AUC of 0.86 (95% confidence interval [CI], 0.79-0.94).
Analysis of the clinical-decision and impact curves showed that the prognostic prediction model with multimodal-data fusion had good clinical validity and applicability. We developed and analyzed a prognostic prediction model for LDH with multimodal-data fusion. Our model demonstrated good performance in predicting early postoperative recurrence in LDH patients; therefore, it has good prospects for clinical application and can provide clinicians with objective, accurate information to help them decide on presurgical treatment plans. However, external-validation studies are still needed to further validate the model's comprehensive performance and improve its generalization and extrapolation.
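The LASSO screening-then-scoring step can be illustrated with a minimal coordinate-descent implementation on synthetic data. Everything here is invented for illustration (feature counts, penalty, data); the point is only the mechanism: the L1 penalty zeroes out uninformative features, and the surviving weighted sum is the Rad-score.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    # Minimise (1/2n)||y - Xw||^2 + alpha*||w||_1 by cyclic coordinate descent
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]  # residual excluding feature j
            rho = X[:, j] @ r
            # Soft-threshold: small correlations are zeroed out entirely
            w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return w

# Toy stand-in for the 851 radiomic features: only a few carry signal.
rng = np.random.default_rng(0)
X = rng.normal(size=(80, 15))
true_w = np.zeros(15)
true_w[[1, 4, 7]] = [2.0, -1.5, 1.0]
y = X @ true_w + 0.05 * rng.normal(size=80)

w = lasso_cd(X, y, alpha=0.1)
selected = np.flatnonzero(np.abs(w) > 1e-8)  # features surviving the L1 penalty
rad_scores = X @ w                           # one Rad-score per patient
```

In practice this is what a library LASSO (e.g. scikit-learn's `Lasso`) does internally; the retained coefficients define the Rad-score formula applied to new patients.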

Leveraging Representation Learning for Bi-parametric Prostate MRI to Disambiguate PI-RADS 3 and Improve Biopsy Decision Strategies.

Umapathy L, Johnson PM, Dutt T, Tong A, Chopra S, Sodickson DK, Chandarana H

pubmed logopapersJun 30 2025
Despite its high negative predictive value (NPV) for clinically significant prostate cancer (csPCa), MRI suffers from a substantial number of false positives, especially for intermediate-risk cases. In this work, we determine whether a deep learning model trained with PI-RADS-guided representation learning can disambiguate the PI-RADS 3 classification, detect csPCa from bi-parametric prostate MR images, and avoid unnecessary benign biopsies. This study included 28,263 MR examinations and radiology reports from 21,938 men imaged for known or suspected prostate cancer between 2015 and 2023 at our institution (21 imaging locations with 34 readers), with 6352 subsequent biopsies. We trained a deep learning model, a representation learner (RL), to learn how radiologists interpret conventionally acquired T2-weighted and diffusion-weighted MR images, using exams in which the radiologists are confident in their risk assessments (PI-RADS 1 and 2 for the absence of csPCa vs. PI-RADS 4 and 5 for the presence of csPCa, n=21,465). We then trained biopsy-decision models to detect csPCa (Gleason score ≥7) using these learned image representations, and compared them to the performance of radiologists, and of models trained on other clinical variables (age, prostate volume, PSA, and PSA density) for treatment-naïve test cohorts consisting of only PI-RADS 3 (n=253, csPCa=103) and all PI-RADS (n=531, csPCa=300) cases. On the 2 test cohorts (PI-RADS-3-only, all-PI-RADS), RL-based biopsy-decision models consistently yielded higher AUCs in detecting csPCa (AUC=0.73 [0.66, 0.79], 0.88 [0.85, 0.91]) compared with radiologists (equivocal, AUC=0.79 [0.75, 0.83]) and the clinical model (AUCs=0.69 [0.62, 0.75], 0.78 [0.74, 0.82]). 
In the PI-RADS-3-only cohort, all of whom would be biopsied under our institution's standard of care, the RL decision model avoided 41% (62/150) of benign biopsies compared with the clinical model (26%, P<0.001), and improved biopsy yield by 10% compared with the PI-RADS ≥3 decision strategy (0.50 vs. 0.40). Furthermore, in the all-PI-RADS cohort, the RL decision model avoided an additional 27% of benign biopsies (138/231) compared with radiologists (33%, P<0.001), with comparable sensitivity (93% vs. 92%), higher NPV (0.87 vs. 0.77), and higher biopsy yield (0.75 vs. 0.64). The combination of clinical and RL decision models avoided still more benign biopsies (46% in PI-RADS-3-only and 62% in all-PI-RADS) while improving NPV (0.82, 0.88) and biopsy yields (0.52, 0.76) across the 2 test cohorts. Our PI-RADS-guided deep learning RL model learns summary representations from bi-parametric prostate MR images that can provide additional information to disambiguate intermediate-risk PI-RADS 3 assessments. The resulting RL-based biopsy-decision models also outperformed radiologists in avoiding benign biopsies while maintaining comparable sensitivity to csPCa for the all-PI-RADS cohort. Such AI models can be readily integrated into clinical practice to supplement radiologists' reads and improve biopsy yield for equivocal decisions.
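The sensitivity, NPV, and biopsy-yield comparisons above all follow from simple counts of biopsied versus spared patients. A sketch with hypothetical counts (not the study's data):

```python
def biopsy_metrics(tp, fp, tn, fn):
    # "Positive" = the rule recommends biopsy; tp = biopsied csPCa cases,
    # fp = biopsied benign cases, tn = spared benign, fn = spared csPCa.
    sensitivity = tp / (tp + fn)
    npv = tn / (tn + fn)           # cancer-free fraction among those spared biopsy
    biopsy_yield = tp / (tp + fp)  # csPCa fraction among biopsies performed
    return sensitivity, npv, biopsy_yield

# Hypothetical counts for illustration:
sens, npv, biopsy_yield = biopsy_metrics(tp=90, fp=60, tn=40, fn=10)
```

The trade-off the abstract describes is visible in these formulas: sparing more patients raises `tn` and biopsy yield, but any spared cancer (`fn`) lowers both sensitivity and NPV.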

Cost-effectiveness analysis of artificial intelligence (AI) in earlier detection of liver lesions in cirrhotic patients at risk of hepatocellular carcinoma in Italy.

Maas L, Contreras-Meca C, Ghezzo S, Belmans F, Corsi A, Cant J, Vos W, Bobowicz M, Rygusik M, Laski DK, Annemans L, Hiligsmann M

PubMed · Jun 30, 2025
Hepatocellular carcinoma (HCC) is the fifth most common cancer worldwide and the third most common cause of cancer-related death. Cirrhosis is a major contributing factor, accounting for over 90% of HCC cases. Given HCC's high mortality rate, earlier detection is critical. When added to magnetic resonance imaging (MRI), artificial intelligence (AI) has been shown to improve HCC detection. Nonetheless, to date no cost-effectiveness analyses have been conducted on an AI tool for earlier HCC detection. This study reports on the cost-effectiveness of AI-enhanced MRI for detecting liver lesions during HCC surveillance in patients with cirrhosis, compared with usual care (UC). The model structure comprised a decision tree followed by a state-transition Markov model, from an Italian healthcare perspective. Lifetime costs and quality-adjusted life years (QALYs) were simulated in cirrhotic patients at risk of HCC. One-way and two-way sensitivity analyses were performed. Results were presented as incremental cost-effectiveness ratios (ICERs). For patients receiving UC, the average lifetime costs per 1,000 patients were €16,604,800, compared to €16,610,250 for patients receiving the AI approach. With a gain of 0.55 QALYs and incremental costs of €5,000 for every 1,000 patients, the ICER was €9,888 per QALY gained, indicating cost-effectiveness at the willingness-to-pay threshold of €33,000/QALY gained. The main drivers of cost-effectiveness were the cost and performance (sensitivity and specificity) of the AI tool. This study suggests that an AI-based approach to detecting HCC earlier in cirrhotic patients can be cost-effective. By incorporating cost-effective AI-based approaches into clinical practice, patient outcomes and healthcare efficiency can be improved.
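The ICER calculation behind these figures is a one-liner. Using the per-1,000-patient lifetime costs quoted above and the 0.55 QALY gain, the ratio comes out near €9,900/QALY; the small gap to the reported €9,888 presumably reflects rounding in the published inputs.

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    # Incremental cost-effectiveness ratio: extra cost per QALY gained
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Per 1,000 patients, with the lifetime costs and QALY gain quoted above:
value = icer(16_610_250, 16_604_800, qaly_new=0.55, qaly_ref=0.0)
cost_effective = value < 33_000  # willingness-to-pay threshold in EUR/QALY
```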

Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles.

Shen X, Huang H, Nichyporuk B, Arbel T

PubMed · Jun 30, 2025
Once deployed, medical image analysis methods are often faced with unexpected image corruptions and noise perturbations. These unknown covariate shifts present significant challenges to deep learning based methods trained on "clean" images. This often results in unreliable predictions and poorly calibrated confidence, hence hindering clinical applicability. While recent methods have been developed to address specific issues such as confidence calibration or adversarial robustness, no single framework effectively tackles all these challenges simultaneously. To bridge this gap, we propose LaDiNE, a novel ensemble learning method combining the robustness of Vision Transformers with diffusion-based generative models for improved reliability in medical image classification. Specifically, transformer encoder blocks are used as hierarchical feature extractors that learn invariant features from images for each ensemble member, resulting in features that are robust to input perturbations. In addition, diffusion models are used as flexible density estimators to estimate member densities conditioned on the invariant features, leading to improved modeling of complex data distributions while retaining properly calibrated confidence. Extensive experiments on tuberculosis chest X-rays and melanoma skin cancer datasets demonstrate that LaDiNE achieves superior performance compared to a wide range of state-of-the-art methods by simultaneously improving prediction accuracy and confidence calibration under unseen noise, adversarial perturbations, and resolution degradation.
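One plausible reading of the density-conditioned ensemble idea is that each member's class probabilities are weighted by how "in-distribution" the input looks to that member's density estimator. The sketch below is a heavily simplified caricature of that combination rule, not LaDiNE's actual implementation:

```python
import numpy as np

def density_weighted_ensemble(member_probs, member_densities):
    # Weight each ensemble member by the (normalised) density its estimator
    # assigns to the input, then average the class probabilities.
    w = np.asarray(member_densities, dtype=float)
    w = w / w.sum()
    return (w[:, None] * np.asarray(member_probs, dtype=float)).sum(axis=0)

probs = [[0.9, 0.1], [0.6, 0.4], [0.2, 0.8]]  # per-member class probabilities
dens  = [1.0, 1.0, 0.0]                        # third member: input looks off-distribution
fused = density_weighted_ensemble(probs, dens)  # third member contributes nothing
```

Down-weighting off-distribution members is one way such an ensemble can stay calibrated under unseen corruptions.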