Page 55 of 2172168 results

CT-free attenuation and Monte-Carlo based scatter correction-guided quantitative <sup>90</sup>Y-SPECT imaging for improved dose calculation using deep learning.

Mansouri Z, Salimi Y, Wolf NB, Mainta I, Zaidi H

PubMed | Jul 1 2025
This work aimed to develop deep learning (DL) models for CT-free attenuation correction and Monte Carlo-based scatter correction (AC, SC) in quantitative <sup>90</sup>Y SPECT imaging for improved dose calculation. Data from 190 patients who underwent <sup>90</sup>Y selective internal radiation therapy (SIRT) with glass microspheres were studied. Voxel-level dosimetry was performed on uncorrected and corrected SPECT images using the local energy deposition method. Three DL models were trained individually for AC, SC, and joint ASC using a modified 3D shifted-window UNet Transformer (Swin UNETR) architecture; corrected and uncorrected dose maps served as reference and inputs, respectively. The data were split into a training set (~80%) and an unseen test set (~20%), and training was conducted in a five-fold cross-validation scheme before the trained models were tested on the unseen test set. Model performance was evaluated by comparing organ- and voxel-level dosimetry results between the reference and DL-generated dose maps on the unseen test dataset. The voxel- and organ-level evaluations also included gamma analysis with three different distance-to-agreement (DTA, mm) and dose difference (DD, %) criteria to explore suitable criteria for SIRT dosimetry using SPECT. The average ± SD voxel-level quantitative metrics for the AC task are mean error (ME, Gy): -0.026 ± 0.06; structural similarity index (SSIM, %): 99.5 ± 0.25; and peak signal-to-noise ratio (PSNR, dB): 47.28 ± 3.31. For the SC task these values are -0.014 ± 0.05, 99.88 ± 0.099, and 55.9 ± 4, respectively; for the ASC task, -0.04 ± 0.06, 99.57 ± 0.33, and 47.97 ± 3.6. Voxel-level gamma pass rates with three different criteria, namely "DTA: 4.79 mm, DD: 1%", "DTA: 10 mm, DD: 5%", and "DTA: 15 mm, DD: 10%", were all around 98%.
The mean absolute error (MAE, Gy) for tumor and whole normal liver across tasks is as follows: 7.22 ± 5.9 and 1.09 ± 0.86 for AC, 8 ± 9.3 and 0.9 ± 0.8 for SC, and 11.8 ± 12.02 and 1.3 ± 0.98 for ASC, respectively. We developed multiple models for three different clinical scenarios, namely AC, SC, and ASC, using patient-specific Monte Carlo scatter-corrected and CT-based attenuation-corrected images. After training with a larger dataset, these task-specific models could be used to perform the essential corrections where CT images are either unavailable or unreliable due to misalignment.
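The abstract reports voxel-level ME, MAE, and PSNR between reference and DL-generated dose maps. As a minimal sketch of how those metrics are computed (generic definitions on synthetic arrays, not the paper's pipeline; the function name and data are hypothetical):

```python
import numpy as np

def voxel_metrics(reference, predicted):
    """Mean error, mean absolute error, and PSNR between two dose maps (Gy).

    Generic definitions; the function name and inputs are hypothetical,
    not taken from the paper.
    """
    diff = predicted - reference
    me = diff.mean()                        # mean error (Gy)
    mae = np.abs(diff).mean()               # mean absolute error (Gy)
    mse = (diff ** 2).mean()
    peak = reference.max()                  # peak dose as the PSNR reference
    psnr = 10 * np.log10(peak ** 2 / mse) if mse > 0 else np.inf
    return me, mae, psnr

rng = np.random.default_rng(0)
ref = rng.uniform(0, 100, size=(32, 32, 32))    # synthetic "reference" dose map
pred = ref + rng.normal(0, 1, size=ref.shape)   # lightly perturbed "prediction"
me, mae, psnr = voxel_metrics(ref, pred)
```

SSIM additionally requires windowed comparison of local means, variances, and covariances, for which an implementation such as scikit-image's `structural_similarity` is typically used rather than hand-rolled code.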

Cerebrovascular morphology: Insights into normal variations, aging effects and disease implications.

Deshpande A, Zhang LQ, Balu R, Yahyavi-Firouz-Abadi N, Badjatia N, Laksari K, Tahsili-Fahadan P

PubMed | Jul 1 2025
Cerebrovascular morphology plays a critical role in brain health, influencing cerebral blood flow (CBF) and contributing to the pathogenesis of various neurological diseases. This review examines the anatomical structure of the cerebrovascular network and its variations in healthy and diseased populations and highlights age-related changes and their implications in various neurological conditions. Normal variations, including the completeness and anatomical anomalies of the Circle of Willis and collateral circulation, are discussed in relation to their impact on CBF and susceptibility to ischemic events. Age-related changes in the cerebrovascular system, such as alterations in vessel geometry and density, are explored for their contributions to age-related neurological disorders, including Alzheimer's disease and vascular dementia. Advances in medical imaging and computational methods have enabled automatic quantitative assessment of cerebrovascular structures, facilitating the identification of pathological changes in both acute and chronic cerebrovascular disorders. Emerging technologies, including machine learning and computational fluid dynamics, offer new tools for predicting disease risk and patient outcomes based on vascular morphology. This review underscores the importance of understanding cerebrovascular remodeling for early diagnosis and the development of novel therapeutic approaches in brain diseases.

Multi-scale geometric transformer for sparse-view X-ray 3D foot reconstruction.

Wang W, An L, Han G

PubMed | Jul 1 2025
Sparse-view X-ray 3D foot reconstruction aims to recover the three-dimensional structure of the foot from sparse-view X-ray images, a challenging task due to data sparsity and limited viewpoints. This paper presents a novel method using a multi-scale geometric Transformer to enhance reconstruction accuracy and detail representation. Geometric position encoding and a window mechanism are introduced to divide X-ray images into local areas, finely capturing local features. A multi-scale Transformer module based on Neural Radiance Fields (NeRF) enhances the model's ability to express and capture details in complex structures, and an adaptive weight learning strategy further optimizes the Transformer's feature extraction and long-range dependency modelling. Experimental results demonstrate that the proposed method significantly improves reconstruction accuracy and detail preservation of the foot structure under sparse-view X-ray conditions: the multi-scale geometric Transformer effectively captures both local and global features, leading to more accurate and detailed 3D reconstructions and advancing medical image reconstruction from sparse-view X-ray images.

Machine learning-based model to predict long-term tumor control and additional interventions following pituitary surgery for Cushing's disease.

Shinya Y, Ghaith AK, Hong S, Erickson D, Bancos I, Herndon JS, Davidge-Pitts CJ, Nguyen RT, Bon Nieves A, Sáez Alegre M, Morshed RA, Pinheiro Neto CD, Peris Celda M, Pollock BE, Meyer FB, Atkinson JLD, Van Gompel JJ

PubMed | Jul 1 2025
In this study, the authors aimed to establish a supervised machine learning (ML) model based on multiple tree-based algorithms to predict long-term biochemical outcomes and intervention-free survival (IFS) after endonasal transsphenoidal surgery (ETS) in patients with Cushing's disease (CD). The medical records of patients who underwent ETS for CD between 2013 and 2023 were reviewed. Data were collected on patients' baseline characteristics, intervention details, histopathology, surgical outcomes, and postoperative endocrine function. The study's primary outcome was IFS, and therapeutic outcomes were labeled as "under control" or "treatment failure," depending on whether additional therapeutic interventions were required after primary ETS. Decision tree and random forest classifiers were trained and then tested on unseen data to predict long-term IFS, using an 80/20 cohort split. Data from 150 patients, with a median follow-up period of 56 months, were extracted. In the cohort, 42 (28%) patients required additional intervention for persistent or recurrent CD. Consequently, the IFS rates following ETS alone were 83% at 3 years and 78% at 5 years. Multivariable Cox proportional hazards analysis demonstrated that a smaller MRI-detectable tumor diameter (hazard ratio 0.95, 95% CI 0.90-0.99; p = 0.047) was significantly associated with greater IFS; however, the lack of tumor detection on MRI was a poor predictor. The decision tree-based ML model displayed 91% accuracy (95% CI 0.70-0.94, sensitivity 87.0%, specificity 89.0%) in predicting IFS in the unseen test dataset. Random forest analysis revealed that tumor size (mean minimal depth 1.67), Knosp grade (1.75), patient age (1.80), and BMI (1.99) were the four most significant predictors of long-term IFS.
The ML algorithm could predict long-term postoperative endocrinological remission in CD with high accuracy, indicating that prognosis may vary not only with previously reported factors such as tumor size, Knosp grade, gross-total resection, and patient age but also with BMI. The decision tree flowchart could potentially stratify patients with CD before ETS, allowing for the selection of personalized treatment options and thereby assisting in determining treatment plans for these patients. This ML model may lead to a deeper understanding of the complex mechanisms of CD by uncovering patterns embedded within the data.
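The kind of tree-based classifier described above can be sketched on synthetic data. This is an illustrative toy (feature distributions, labels, and thresholds are all hypothetical), not the authors' trained model:

```python
# Synthetic-data sketch of a tree-based outcome classifier; hypothetical values.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(42)
n = 150
X = np.column_stack([
    rng.uniform(2, 30, n),     # tumor diameter (mm)
    rng.integers(0, 5, n),     # Knosp grade
    rng.uniform(18, 80, n),    # age (years)
    rng.uniform(18, 45, n),    # BMI
])
# Hypothetical label: larger tumors more likely to need reintervention.
y = (X[:, 0] + rng.normal(0, 5, n) > 18).astype(int)

# 80/20 split mirroring the study design; model hyperparameters are arbitrary.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)  # held-out accuracy on the 20% split
```

A shallow `max_depth` keeps the tree readable as a flowchart, which is the property the authors exploit for preoperative stratification.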

Deep learning-based segmentation of the trigeminal nerve and surrounding vasculature in trigeminal neuralgia.

Halbert-Elliott KM, Xie ME, Dong B, Das O, Wang X, Jackson CM, Lim M, Huang J, Yedavalli VS, Bettegowda C, Xu R

PubMed | Jul 1 2025
Preoperative workup of trigeminal neuralgia (TN) consists of identification of neurovascular features on MRI. In this study, the authors apply and evaluate the performance of deep learning models for segmentation of the trigeminal nerve and surrounding vasculature to quantify anatomical features of the nerve and vessels. Six U-Net-based neural networks, each with a different encoder backbone, were trained to label constructive interference in steady-state MRI voxels as nerve, vasculature, or background. A retrospective dataset of 50 TN patients at the authors' institution who underwent preoperative high-resolution MRI in 2022 was utilized to train and test the models. Performance was measured by the Dice coefficient and intersection over union (IoU) metrics. Anatomical characteristics, such as surface area of neurovascular contact and distance to the contact point, were computed and compared between the predicted and ground truth segmentations. Of the evaluated models, the best performing was U-Net with an SE-ResNet50 backbone (Dice score = 0.775 ± 0.015, IoU score = 0.681 ± 0.015). When the SE-ResNet50 backbone was used, the average surface area of neurovascular contact in the testing dataset was 6.90 mm<sup>2</sup>, which was not significantly different from the surface area calculated from manual segmentation (p = 0.83). The average calculated distance from the brainstem to the contact point was 4.34 mm, which was also not significantly different from manual segmentation (p = 0.29). U-Net-based neural networks perform well for segmenting trigeminal nerve and vessels from preoperative MRI volumes. This technology enables the development of quantitative and objective metrics for radiographic evaluation of TN.
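The Dice coefficient and IoU reported above have standard definitions for binary masks; a minimal sketch (generic definitions, not code from the study):

```python
import numpy as np

def dice_and_iou(pred, truth):
    """Dice coefficient and intersection-over-union for binary masks.

    Standard overlap definitions; illustrative only.
    """
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = 2 * inter / (pred.sum() + truth.sum())
    iou = inter / np.logical_or(pred, truth).sum()
    return dice, iou

# Two overlapping 4x4 squares on an 8x8 grid (16 px each, 9 px overlap).
a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1
dice, iou = dice_and_iou(a, b)  # dice = 18/32 = 0.5625, iou = 9/23
```

Dice and IoU are monotonically related (Dice = 2·IoU/(1+IoU)), which is why the two scores above rank the six backbones identically.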

Enhanced diagnostic and prognostic assessment of cardiac amyloidosis using combined <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy.

Hong Z, Spielvogel CP, Xue S, Calabretta R, Jiang Z, Yu J, Kluge K, Haberl D, Nitsche C, Grünert S, Hacker M, Li X

PubMed | Jul 1 2025
Cardiac amyloidosis (CA) is a severe condition characterized by amyloid fibril deposition in the myocardium, leading to restrictive cardiomyopathy and heart failure. Differentiating between amyloidosis subtypes is crucial due to distinct treatment strategies, and the individual conventional diagnostic methods lack the accuracy needed for effective subtype identification. This study aimed to evaluate the efficacy of combining <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy in detecting CA and distinguishing between its main subtypes, light chain (AL) and transthyretin (ATTR) amyloidosis, while assessing the association of imaging findings with patient prognosis. We retrospectively evaluated the diagnostic efficacy of combining <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy in a cohort of 50 patients with clinical suspicion of CA. Semi-quantitative imaging markers were extracted from the images, and diagnostic performance was calculated against biopsy results or genetic testing. Both machine learning models and a rationale-based model were developed to detect CA and classify subtypes. Survival prediction over five years was assessed using a random survival forest model, and prognostic value was assessed using Kaplan-Meier estimators and Cox proportional hazards models. The combined imaging approach significantly improved diagnostic accuracy, with <sup>11</sup>C-PiB PET and <sup>99m</sup>Tc-DPD scintigraphy showing complementary strengths in detecting AL and ATTR, respectively. The machine learning model achieved an AUC of 0.94 (95% CI 0.93-0.95) for CA subtype differentiation, while the rationale-based model demonstrated strong diagnostic ability with AUCs of 0.95 (95% CI 0.88-1.00) for ATTR and 0.88 (95% CI 0.77-0.96) for AL. Survival prediction models identified key prognostic markers, with significant stratification of overall mortality based on predicted survival (p = 0.006; adjusted HR 2.43 [95% CI 1.03-5.71]).
The integration of <sup>11</sup>C-PiB PET/CT and <sup>99m</sup>Tc-DPD scintigraphy, supported by both machine learning and rationale-based models, enhances the diagnostic accuracy and prognostic assessment of cardiac amyloidosis, with significant implications for clinical practice.

Regression modeling with convolutional neural network for predicting extent of resection from preoperative MRI in giant pituitary adenomas: a pilot study.

Patel BK, Tariciotti L, DiRocco L, Mandile A, Lohana S, Rodas A, Zohdy YM, Maldonado J, Vergara SM, De Andrade EJ, Revuelta Barbero JM, Reyes C, Solares CA, Garzon-Muvdi T, Pradilla G

PubMed | Jul 1 2025
Giant pituitary adenomas (GPAs) are challenging skull base tumors due to their size and proximity to critical neurovascular structures. Achieving gross-total resection (GTR) can be difficult, and residual tumor burden is commonly reported. This study evaluated the ability of convolutional neural networks (CNNs) to predict the extent of resection (EOR) from preoperative MRI with the goals of enhancing surgical planning, improving preoperative patient counseling, and enhancing multidisciplinary postoperative coordination of care. A retrospective study of 100 consecutive patients with GPAs was conducted. Patients underwent surgery via the endoscopic endonasal transsphenoidal approach. CNN models were trained on DICOM images from preoperative MR images to predict EOR, using a split of 80 patients for training and 20 for validation. The models included different architectural modules to refine image selection and predict EOR based on tumor-contained images in various anatomical planes. The model design, training, and validation were conducted in a local environment in Python using the TensorFlow machine learning system. The median preoperative tumor volume was 19.4 cm<sup>3</sup>. The median EOR was 94.5%, with GTR achieved in 49% of cases. The CNN model showed high predictive accuracy, especially when analyzing images from the coronal plane, with a root mean square error of 2.9916 and a mean absolute error of 2.6225. The coefficient of determination (R<sup>2</sup>) was 0.9823, indicating excellent model performance. CNN-based models may effectively predict the EOR for GPAs from preoperative MRI scans, offering a promising tool for presurgical assessment and patient counseling. Confirmatory studies with large patient samples are needed to definitively validate these findings.
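The RMSE, MAE, and R<sup>2</sup> figures above follow the standard regression definitions; a small illustration on synthetic extent-of-resection values (hypothetical data and names, not the study's pipeline):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """RMSE, MAE, and the coefficient of determination (R^2).

    Standard definitions; illustrative only.
    """
    err = y_pred - y_true
    rmse = np.sqrt((err ** 2).mean())
    mae = np.abs(err).mean()
    ss_res = (err ** 2).sum()                          # residual sum of squares
    ss_tot = ((y_true - y_true.mean()) ** 2).sum()     # total sum of squares
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, r2

rng = np.random.default_rng(1)
eor_true = rng.uniform(70, 100, 20)          # hypothetical EOR values (%)
eor_pred = eor_true + rng.normal(0, 3, 20)   # hypothetical model predictions
rmse, mae, r2 = regression_metrics(eor_true, eor_pred)
```

Because EOR is expressed as a percentage, the reported RMSE of ~3.0 can be read directly as about three percentage points of resection.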

Deep learning image enhancement algorithms in PET/CT imaging: a phantom and sarcoma patient radiomic evaluation.

Bonney LM, Kalisvaart GM, van Velden FHP, Bradley KM, Hassan AB, Grootjans W, McGowan DR

PubMed | Jul 1 2025
PET/CT imaging data contains a wealth of quantitative information that can provide valuable contributions to characterising tumours. A growing body of work focuses on the use of deep-learning (DL) techniques for denoising PET data. These models are clinically evaluated prior to use; however, quantitative image assessment provides potential for further evaluation. This work uses radiomic features to compare two manufacturer DL image enhancement algorithms, one of which has been commercialised, against 'gold-standard' image reconstruction techniques in phantom data and a sarcoma patient dataset (N = 20). All studies in the retrospective sarcoma clinical [<sup>18</sup>F]FDG dataset were acquired on either a GE Discovery 690 or 710 PET/CT scanner, with volumes segmented by an experienced nuclear medicine radiologist. The modular heterogeneous imaging phantom used in this work was filled with [<sup>18</sup>F]FDG, and five repeat acquisitions of the phantom were acquired on a GE Discovery 710 PET/CT scanner. The DL-enhanced images were compared both to the 'gold-standard' images the algorithms were trained to emulate and to the input images. The difference between image sets was tested for significance in 93 Image Biomarker Standardisation Initiative (IBSI) standardised radiomic features. Comparing DL-enhanced images to the 'gold-standard', 4.0% and 9.7% of radiomic features measured significantly different (p<sub>critical</sub> < 0.0005) in the phantom and patient data respectively (averaged over the two DL algorithms).
Larger differences were observed when comparing DL-enhanced images to the algorithm input images, with 29.8% and 43.0% of radiomic features measuring significantly different in the phantom and patient data respectively (averaged over the two DL algorithms). DL-enhanced images were found to be similar to images generated using the 'gold-standard' target reconstruction method, with more than 80% of radiomic features not significantly different in all comparisons across unseen phantom and sarcoma patient data. This result offers insight into the performance of the DL algorithms and demonstrates potential applications for DL algorithms in harmonisation for radiomics and for radiomic features in the quantitative evaluation of DL algorithms.
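A per-feature significance screen of this kind can be sketched with a paired test across subjects and a strict critical p-value, as in the abstract's threshold. Synthetic data and a hypothetical setup; the study's exact statistical procedure may differ:

```python
# Sketch of a per-feature significance screen: paired t-tests across subjects
# with a strict critical p-value. Synthetic data; illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_features = 20, 93
p_critical = 0.0005            # strict threshold, as in the abstract

reference = rng.normal(0.0, 1.0, (n_subjects, n_features))  # "gold-standard" values
enhanced = reference + rng.normal(0.0, 0.1, (n_subjects, n_features))
enhanced[:, 0] += 1.0          # one genuinely shifted feature

_, pvals = stats.ttest_rel(enhanced, reference, axis=0)     # paired test per feature
n_sig = int((pvals < p_critical).sum())  # features measuring significantly different
```

Dividing a conventional alpha by the number of features tested (0.05/93 ≈ 0.0005) is one way such a critical value arises, which is consistent with, though not stated by, the abstract.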

A Review of the Opportunities and Challenges with Large Language Models in Radiology: The Road Ahead.

Soni N, Ora M, Agarwal A, Yang T, Bathla G

PubMed | Jul 1 2025
In recent years, generative artificial intelligence (AI), particularly large language models (LLMs) and their multimodal counterparts (multimodal large language models, including vision language models), has generated considerable interest in the global AI discourse. LLMs, or pre-trained language models (such as ChatGPT, Med-PaLM, and LLaMA), are neural network architectures trained on extensive text data, excelling in language comprehension and generation. Multimodal LLMs, a subset of foundation models, are trained on multimodal datasets, integrating text with another modality, such as images, to better learn universal representations akin to human cognition. This versatility enables them to excel in tasks like chatbots, translation, and creative writing while facilitating knowledge sharing through transfer learning, federated learning, and synthetic data creation. Several of these models have potentially appealing applications in the medical domain, including, but not limited to, enhancing patient care by processing patient data; summarizing reports and relevant literature; providing diagnostic, treatment, and follow-up recommendations; and ancillary tasks like coding and billing. As radiologists enter this promising but uncharted territory, it is imperative for them to be familiar with the basic terminology and processes of LLMs. Herein, we present an overview of LLMs and their potential applications and challenges in the imaging domain.

Enhancing Magnetic Resonance Imaging (MRI) Report Comprehension in Spinal Trauma: Readability Analysis of AI-Generated Explanations for Thoracolumbar Fractures.

Sing DC, Shah KS, Pompliano M, Yi PH, Velluto C, Bagheri A, Eastlack RK, Stephan SR, Mundis GM

PubMed | Jul 1 2025
Magnetic resonance imaging (MRI) reports are challenging for patients to interpret and may subject patients to unnecessary anxiety. The advent of advanced artificial intelligence (AI) large language models (LLMs), such as GPT-4o, holds promise for translating complex medical information into layman's terms. This paper aims to evaluate the accuracy, helpfulness, and readability of GPT-4o in explaining MRI reports of patients with thoracolumbar fractures. MRI reports of 20 patients presenting with thoracic or lumbar vertebral body fractures were obtained. GPT-4o was prompted to explain each MRI report in layman's terms. The generated explanations were then presented to 7 board-certified spine surgeons, who evaluated their helpfulness and accuracy. The MRI report text and GPT-4o explanations were then analyzed to grade the readability of the texts using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) scale. The layman explanations provided by GPT-4o were found to be helpful by all surgeons in 17 cases, with 6 of 7 surgeons finding the information helpful in the remaining 3 cases. GPT-4o-generated layman reports were rated as "accurate" by all 7 surgeons in 11/20 cases (55%). In an additional 5/20 cases (25%), 6 out of 7 surgeons agreed on their accuracy. In the remaining 4/20 cases (20%), accuracy ratings varied, with 4 or 5 surgeons considering them accurate. Review of surgeon feedback on inaccuracies revealed that the radiology reports were often insufficiently detailed. The mean FRES of the MRI reports was significantly lower than that of the GPT-4o explanations (32.15, SD 15.89 vs 53.9, SD 7.86; P < .001). The mean FKGL of the MRI reports trended higher than that of the GPT-4o explanations (11th-12th grade vs 10th-11th grade level; P = .11). Overall helpfulness and readability ratings for AI-generated summaries of MRI reports were high, with few inaccuracies recorded.
This study demonstrates the potential of GPT-4o to serve as a valuable tool for enhancing patient comprehension of MRI report findings.
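The FRES and FKGL scores used in the study have standard published formulas. A minimal sketch with a crude vowel-group syllable counter (so the scores are approximate; the example sentences are hypothetical, not from the study's reports):

```python
import re

def fres_fkgl(text):
    """Flesch Reading Ease Score and Flesch-Kincaid Grade Level.

    Standard published formulas; the syllable counter is a rough
    vowel-group heuristic, so scores are approximate.
    """
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))

    def syllables(word):
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    n_syll = sum(syllables(w) for w in words)
    wps, spw = n_words / sentences, n_syll / n_words   # words/sentence, syllables/word
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fres, fkgl

# Hypothetical report-style vs. layman-style sentences.
report = "There is an acute compression fracture of the L1 vertebral body."
plain = "One of the bones in your lower back has a new break."
fres_report, fkgl_report = fres_fkgl(report)
fres_plain, fkgl_plain = fres_fkgl(plain)
```

Higher FRES means easier reading and lower FKGL means a lower grade level, so a layman rewrite should score higher on FRES and lower on FKGL than the source report, as the study found.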