
Accuracy of an Automated Bone Scan Index Measurement System Enhanced by Deep Learning of the Female Skeletal Structure in Patients with Breast Cancer.

Fukai S, Daisaki H, Yamashita K, Kuromori I, Motegi K, Umeda T, Shimada N, Takatsu K, Terauchi T, Koizumi M

PubMed · Jun 1 2025
VSBONE® BSI (VSBONE), an automated bone scan index (BSI) measurement system, was updated from version 2.1 (ver.2) to 3.0 (ver.3). VSBONE ver.3 incorporates deep learning of the skeletal structures of 957 newly added women and can be applied to patients with breast cancer. However, the performance of the updated VSBONE remains unclear. This study aimed to validate the diagnostic accuracy of the VSBONE system in patients with breast cancer. In total, 220 Japanese patients with breast cancer who underwent bone scintigraphy with single-photon emission computed tomography/computed tomography (SPECT/CT) were retrospectively analyzed. The patients were classified as having active bone metastases (n = 20) or no bone metastases (n = 200) according to the physicians' radiographic image interpretation. The patients were assessed using VSBONE ver.2 and VSBONE ver.3, and the BSI findings were compared with the physicians' interpretation results. The occurrence of segmentation errors, the association of BSI values between VSBONE ver.2 and VSBONE ver.3, and the diagnostic accuracy of the two systems were evaluated. VSBONE ver.2 and VSBONE ver.3 produced segmentation errors in four and two patients, respectively. A significant positive linear correlation was confirmed between the BSI values of the two versions (r = 0.92). The diagnostic accuracy was 54.1% for VSBONE ver.2 and 80.5% for VSBONE ver.3 (P < 0.001). The diagnostic accuracy of VSBONE was improved through deep learning of female skeletal structures. The updated VSBONE ver.3 can be a reliable automated system for measuring BSI in patients with breast cancer.
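As a hedged illustration of the agreement and accuracy analysis described above, the sketch below computes a Pearson correlation between paired BSI values and a simple diagnostic accuracy against the physicians' reads; the arrays and the BSI > 0 decision rule are placeholders, not the published data or the VSBONE algorithm.

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical paired BSI measurements from the two software versions
# and the physicians' reference reads (1 = active bone metastases, 0 = none).
bsi_v2 = np.array([0.0, 0.3, 1.2, 0.0, 2.5])   # placeholder values
bsi_v3 = np.array([0.0, 0.4, 1.1, 0.1, 2.3])   # placeholder values
reference = np.array([0, 0, 1, 0, 1])

# Linear agreement between the two versions (the abstract reports r = 0.92).
r, p = pearsonr(bsi_v2, bsi_v3)
print(f"Pearson r between versions: {r:.2f} (p = {p:.3g})")

# Diagnostic accuracy of a simple BSI > 0 rule against the physician reads;
# the actual decision rule used by VSBONE is not described in the abstract.
for name, bsi in {"ver.2": bsi_v2, "ver.3": bsi_v3}.items():
    predicted = (bsi > 0).astype(int)
    accuracy = (predicted == reference).mean()
    print(f"VSBONE {name}: accuracy = {accuracy:.1%}")
```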

Generative artificial intelligence enables the generation of bone scintigraphy images and improves generalization of deep learning models in data-constrained environments.

Haberl D, Ning J, Kluge K, Kumpf K, Yu J, Jiang Z, Constantino C, Monaci A, Starace M, Haug AR, Calabretta R, Camoni L, Bertagna F, Mascherbauer K, Hofer F, Albano D, Sciagra R, Oliveira F, Costa D, Nitsche C, Hacker M, Spielvogel CP

PubMed · Jun 1 2025
Advancements in deep learning for medical imaging are often constrained by the limited availability of large, annotated datasets, resulting in underperforming models when deployed under real-world conditions. This study investigated a generative artificial intelligence (AI) approach to create synthetic medical images, using bone scintigraphy scans as an example, to increase the data diversity of small-scale datasets for more effective model training and improved generalization. We trained a generative model on 99mTc bone scintigraphy scans from 9,170 patients at one center to generate high-quality, fully anonymized annotated scans representing two distinct disease patterns: (i) abnormal uptake indicative of bone metastases and (ii) cardiac uptake indicative of cardiac amyloidosis. A blinded reader study was performed to assess the clinical validity and quality of the generated data. We investigated the added value of the generated data by augmenting an independent small single-center dataset with synthetic data and by training a deep learning model to detect abnormal uptake in a downstream classification task. We tested this model on 7,472 scans from 6,448 patients across four external sites in a cross-tracer and cross-scanner setting and associated the resulting model predictions with clinical outcomes. The clinical value and high quality of the synthetic imaging data were confirmed by four readers, who were unable to distinguish synthetic scans from real scans (average accuracy: 0.48 [95% CI 0.46-0.51]), disagreeing in 239 (60%) of 400 cases (Fleiss' kappa: 0.18). Adding synthetic data to the training set improved model performance by a mean (± SD) of 33 (± 10)% AUC (p < 0.0001) for detecting abnormal uptake indicative of bone metastases and by 5 (± 4)% AUC (p < 0.0001) for detecting uptake indicative of cardiac amyloidosis across both internal and external testing cohorts, compared with models trained without synthetic data. Patients with predicted abnormal uptake had adverse clinical outcomes (log-rank: p < 0.0001). Generative AI enables the targeted generation of bone scintigraphy images representing different clinical conditions. Our findings point to the potential of synthetic data to overcome challenges in data sharing and in developing reliable and prognostic deep learning models in data-limited environments.
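A minimal sketch of the augmentation workflow described above, assuming placeholder feature arrays and a simple classifier in place of the authors' generative model and deep network; only the train-on-real versus train-on-real-plus-synthetic comparison and the AUC evaluation are illustrated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical feature vectors for a small real training set, a pool of
# synthetic (generated) scans, and an external test set. In the study these
# would be bone scintigraphy images fed to a deep classifier, not features.
X_real, y_real = rng.normal(size=(50, 16)), rng.integers(0, 2, 50)
X_syn,  y_syn  = rng.normal(size=(500, 16)), rng.integers(0, 2, 500)
X_test, y_test = rng.normal(size=(200, 16)), rng.integers(0, 2, 200)

def train_and_auc(X, y):
    """Fit a simple classifier and report AUC on the external test set."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

auc_real_only = train_and_auc(X_real, y_real)
auc_augmented = train_and_auc(np.vstack([X_real, X_syn]),
                              np.concatenate([y_real, y_syn]))
print(f"AUC real only: {auc_real_only:.3f}")
print(f"AUC augmented: {auc_augmented:.3f}")
```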

Artificial Intelligence Augmented Cerebral Nuclear Imaging.

Currie GM, Hawk KE

PubMed · May 28 2025
Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has significant potential to advance the capabilities of nuclear neuroimaging. Current and emerging applications of ML and DL in the processing, analysis, enhancement, and interpretation of SPECT and PET brain imaging are explored. Key developments include automated image segmentation, disease classification, and radiomic feature extraction, spanning lower-dimensionality first- and second-order radiomics, higher-dimensionality third-order radiomics, and more abstract fourth-order deep radiomics. DL-based reconstruction, attenuation correction using pseudo-CT generation, and denoising of low-count studies have a role in enhancing image quality. AI has a role in sustainability through applications in radioligand design and preclinical imaging, while federated learning addresses data security challenges to improve research and development in nuclear cerebral imaging. There is also potential for generative AI to transform the nuclear cerebral imaging space through solutions to data limitations, image enhancement, patient-centered care, workflow efficiencies, and trainee education. Innovations in ML and DL are re-engineering the nuclear neuroimaging ecosystem and reimagining tomorrow's precision medicine landscape.

Artificial Intelligence in Sincalide-Stimulated Cholescintigraphy: A Pilot Study.

Nguyen NC, Luo J, Arefan D, Vasireddi AK, Wu S

PubMed · May 13 2025
Sincalide-stimulated cholescintigraphy (SSC) calculates the gallbladder ejection fraction (GBEF) to diagnose functional gallbladder disorder. Currently, artificial intelligence (AI)-driven workflows that integrate real-time image processing and organ function calculation remain unexplored in nuclear medicine practice. This pilot study explored an AI-based application for gallbladder radioactivity tracking. We retrospectively analyzed 20 SSC exams, categorized into 10 easy and 10 challenging cases. Two human operators (H1 and H2) independently and manually annotated the gallbladder regions of interest over the course of the 60-minute SSC. A U-Net-based deep learning model was developed to automatically segment gallbladder masks, and a 10-fold cross-validation was performed for both easy and challenging cases. The AI-generated masks were compared with the human-annotated ones, with the Dice similarity coefficient (DICE) used to assess agreement. The AI achieved an average DICE of 0.746 against H1 and 0.676 against H2, performing better in easy cases (0.781) than in challenging ones (0.641). Visual inspection showed that the AI was prone to errors in cases with patient motion or low-count activity. This study highlights AI's potential for real-time gallbladder tracking and GBEF calculation during SSC. AI-enabled real-time evaluation of nuclear imaging data holds promise for advancing clinical workflows by providing instantaneous organ function assessments and feedback to technologists. This AI-enabled workflow could enhance diagnostic efficiency, reduce scan duration, and improve patient comfort by alleviating symptoms associated with SSC, such as abdominal discomfort due to sincalide administration.
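The Dice similarity coefficient used to compare the AI-generated and human-annotated masks can be computed as 2|A ∩ B| / (|A| + |B|); a small sketch with hypothetical binary masks follows.

```python
import numpy as np

def dice_coefficient(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DICE = 2 * |A intersect B| / (|A| + |B|)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Hypothetical 64x64 gallbladder masks from the AI model and a human operator.
ai_mask = np.zeros((64, 64), dtype=bool)
h1_mask = np.zeros((64, 64), dtype=bool)
ai_mask[20:40, 20:40] = True
h1_mask[22:42, 22:42] = True
print(f"DICE (AI vs. H1): {dice_coefficient(ai_mask, h1_mask):.3f}")
```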

Transfer learning-based attenuation correction in 99mTc-TRODAT-1 SPECT for Parkinson's disease using realistic simulation and clinical data.

Huang W, Jiang H, Du Y, Wang H, Sun H, Hung GU, Mok GSP

PubMed · May 6 2025
Dopamine transporter (DAT) SPECT is an effective tool for early Parkinson's disease (PD) detection but is heavily hampered by attenuation. Attenuation correction (AC) is the most important of the required corrections. Transfer learning (TL) with fine-tuning (FT) of a pre-trained model has shown potential for enhancing deep learning (DL)-based AC methods. In this study, we investigated leveraging realistic Monte Carlo (MC) simulation data to create a pre-trained model for TL-based AC (TLAC) to improve AC performance for DAT SPECT. A total of 200 digital brain phantoms with realistic 99mTc-TRODAT-1 distributions were used to generate realistic noisy SPECT projections using the SIMIND MC program and an analytical projector. One hundred real clinical 99mTc-TRODAT-1 brain SPECT datasets were also retrospectively analyzed. All projections were reconstructed with and without CT-based attenuation correction (CTAC/NAC). A 3D conditional generative adversarial network (cGAN) was pre-trained using 200 pairs of simulated NAC and CTAC SPECT data. Subsequently, 8, 24, and 80 pairs of clinical NAC and CTAC DAT SPECT data were employed to fine-tune the pre-trained U-Net generator of the cGAN (TLAC-MC). Comparisons were made against the pre-trained model without FT (DLAC-MC), training on purely limited clinical data (DLAC-CLI), clinical data with data augmentation (DLAC-AUG), mixed MC and clinical data (DLAC-MIX), TL using analytical simulation data (TLAC-ANA), and Chang's AC (ChangAC). All datasets used for the DL-based methods were split 7/8 for training and 1/8 for validation, and 1-, 2-, or 5-fold cross-validation was applied to test all 100 clinical datasets, depending on the number of clinical datasets used for training. With 8 available clinical datasets, TLAC-MC achieved the best results in Normalized Mean Squared Error (NMSE) and Structural Similarity Index Measure (SSIM) (NMSE = 0.0143 ± 0.0082, SSIM = 0.9355 ± 0.0203), followed by DLAC-AUG, DLAC-MIX, TLAC-ANA, DLAC-CLI, DLAC-MC, ChangAC, and NAC. Similar trends held as the number of clinical datasets increased. For TL-based AC methods, the fewer clinical datasets available for FT, the greater the improvement compared with DLAC-CLI trained on the same number of clinical datasets. Joint histogram analysis and Bland-Altman plots of the SBR results also demonstrated consistent findings. TLAC is feasible for DAT SPECT with a pre-trained model generated purely from simulation data. TLAC-MC demonstrates superior performance over other DL-based AC methods, particularly when limited clinical datasets are available. The closer the pre-training data is to the target domain, the better the performance of the TLAC model.
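As a hedged illustration of the NMSE and SSIM metrics used to compare the corrected volumes against the CTAC reference, the sketch below uses random placeholder volumes; it does not reproduce the cGAN or the TLAC pipeline itself.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def nmse(prediction: np.ndarray, reference: np.ndarray) -> float:
    """Normalized mean squared error: ||pred - ref||^2 / ||ref||^2."""
    return float(np.sum((prediction - reference) ** 2) / np.sum(reference ** 2))

# Hypothetical 3D SPECT volumes: a network-corrected volume (e.g. a TLAC-MC
# output) and the CT-based attenuation-corrected reference.
rng = np.random.default_rng(0)
ctac = rng.random((64, 64, 64)).astype(np.float32)
predicted = ctac + 0.05 * rng.normal(size=ctac.shape).astype(np.float32)

print(f"NMSE = {nmse(predicted, ctac):.4f}")
print(f"SSIM = {ssim(predicted, ctac, data_range=float(ctac.max() - ctac.min())):.4f}")
```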