Multi-scale geometric transformer for sparse-view X-ray 3D foot reconstruction.

Wang W, An L, Han G

PubMed · Jul 1, 2025
Sparse-view X-ray 3D foot reconstruction aims to recover the three-dimensional structure of the foot from a small number of X-ray views, a task made challenging by data sparsity and limited viewpoints. This paper presents a multi-scale geometric Transformer that improves reconstruction accuracy and detail representation. A geometric position encoding and a window mechanism divide the X-ray images into local regions, finely capturing local features, while a multi-scale Transformer module based on Neural Radiance Fields (NeRF) strengthens the model's ability to represent details in complex structures. An adaptive weight learning strategy further optimizes the Transformer's feature extraction and long-range dependency modelling. Experimental results demonstrate that the proposed method significantly improves reconstruction accuracy and detail preservation of the foot structure under sparse-view X-ray conditions: the multi-scale geometric Transformer effectively captures both local and global features, yielding more accurate and detailed 3D reconstructions from sparse-view X-ray images.
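
As a concrete illustration of the windowing idea described above, here is a minimal PyTorch sketch (not the authors' code) combining window partitioning of an X-ray feature map with a learned geometric position encoding fed to a standard Transformer encoder. The window size, embedding dimension, and head count are illustrative assumptions, and the NeRF-based multi-scale module is omitted.

```python
# Minimal sketch: windowed self-attention with a learned position encoding.
# All hyperparameters are assumptions, not values from the paper.
import torch
import torch.nn as nn

class WindowedGeometricEncoder(nn.Module):
    def __init__(self, embed_dim=64, window=8, num_heads=4):
        super().__init__()
        self.window = window
        # Geometric position encoding: one learned vector per pixel in a window.
        self.pos = nn.Parameter(torch.zeros(window * window, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):                      # x: (B, C, H, W), C == embed_dim
        B, C, H, W = x.shape
        w = self.window
        # Partition the feature map into non-overlapping w x w local windows.
        x = x.view(B, C, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 3, 5, 1).reshape(-1, w * w, C)
        # Self-attention runs within each window, so features stay local.
        x = self.encoder(x + self.pos)
        x = x.view(B, H // w, W // w, w, w, C)
        return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

feats = torch.randn(1, 64, 32, 32)             # stand-in for X-ray features
out = WindowedGeometricEncoder()(feats)        # same shape, locally attended
```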

Machine learning-based model to predict long-term tumor control and additional interventions following pituitary surgery for Cushing's disease.

Shinya Y, Ghaith AK, Hong S, Erickson D, Bancos I, Herndon JS, Davidge-Pitts CJ, Nguyen RT, Bon Nieves A, Sáez Alegre M, Morshed RA, Pinheiro Neto CD, Peris Celda M, Pollock BE, Meyer FB, Atkinson JLD, Van Gompel JJ

PubMed · Jul 1, 2025
In this study, the authors aimed to establish a supervised machine learning (ML) model based on multiple tree-based algorithms to predict long-term biochemical outcomes and intervention-free survival (IFS) after endonasal transsphenoidal surgery (ETS) in patients with Cushing's disease (CD). The medical records of patients who underwent ETS for CD between 2013 and 2023 were reviewed. Data were collected on patients' baseline characteristics, intervention details, histopathology, surgical outcomes, and postoperative endocrine function. The study's primary outcome was IFS, and therapeutic outcomes were labeled as "under control" or "treatment failure," depending on whether additional therapeutic intervention after primary ETS was required. Decision tree and random forest classifiers were trained and tested to predict long-term IFS on unseen data, using an 80/20 cohort split. Data from 150 patients, with a median follow-up of 56 months, were extracted. In the cohort, 42 patients (28%) required additional intervention for persistent or recurrent CD; consequently, the IFS rates following ETS alone were 83% at 3 years and 78% at 5 years. Multivariable Cox proportional hazards analysis demonstrated that a smaller MRI-detectable tumor diameter was significantly associated with greater IFS (hazard ratio 0.95, 95% CI 0.90-0.99; p = 0.047); however, the lack of tumor detection on MRI was a poor predictor. The decision tree model displayed 91% accuracy (95% CI 0.70-0.94; sensitivity 87.0%, specificity 89.0%) in predicting IFS on the unseen test dataset. Random forest analysis revealed that tumor size (mean minimal depth 1.67), Knosp grade (1.75), patient age (1.80), and BMI (1.99) were the four most important predictors of long-term IFS (lower mean minimal depth indicates greater importance). The ML algorithm predicted long-term postoperative endocrinological remission in CD with high accuracy, indicating that prognosis may vary not only with previously reported factors such as tumor size, Knosp grade, gross-total resection, and patient age but also with BMI. The decision tree flowchart could potentially stratify patients with CD before ETS, supporting the selection of personalized treatment options and thereby assisting in determining treatment plans. This ML model may also lead to a deeper understanding of the complex mechanisms of CD by uncovering patterns embedded within the data.
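
For readers who want to reproduce the general setup, the following scikit-learn sketch mirrors the tree-based design the abstract describes: an 80/20 split with a decision tree and a random forest trained on tabular predictors. The synthetic data and feature list (tumor size, Knosp grade, age, BMI) are placeholders, not the authors' dataset.

```python
# Hedged sketch of an 80/20 tree-based classification setup (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(150, 4))                  # tumor size, Knosp, age, BMI
y = rng.integers(0, 2, size=150)               # 1 = under control, 0 = failure

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

for model in (DecisionTreeClassifier(max_depth=4, random_state=0),
              RandomForestClassifier(n_estimators=500, random_state=0)):
    model.fit(X_tr, y_tr)
    print(type(model).__name__, accuracy_score(y_te, model.predict(X_te)))
```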

Deep learning-based segmentation of the trigeminal nerve and surrounding vasculature in trigeminal neuralgia.

Halbert-Elliott KM, Xie ME, Dong B, Das O, Wang X, Jackson CM, Lim M, Huang J, Yedavalli VS, Bettegowda C, Xu R

PubMed · Jul 1, 2025
Preoperative workup of trigeminal neuralgia (TN) consists of identifying neurovascular features on MRI. In this study, the authors apply and evaluate deep learning models for segmentation of the trigeminal nerve and surrounding vasculature to quantify anatomical features of the nerve and vessels. Six U-Net-based neural networks, each with a different encoder backbone, were trained to label constructive interference in steady-state (CISS) MRI voxels as nerve, vasculature, or background. A retrospective dataset of 50 TN patients at the authors' institution who underwent preoperative high-resolution MRI in 2022 was used to train and test the models. Performance was measured with the Dice coefficient and intersection over union (IoU). Anatomical characteristics, such as the surface area of neurovascular contact and the distance to the contact point, were computed and compared between the predicted and ground truth segmentations. Of the evaluated models, the best performing was a U-Net with an SE-ResNet50 backbone (Dice score = 0.775 ± 0.015, IoU = 0.681 ± 0.015). With the SE-ResNet50 backbone, the average surface area of neurovascular contact in the testing dataset was 6.90 mm², not significantly different from the surface area calculated from manual segmentation (p = 0.83). The average calculated distance from the brainstem to the contact point was 4.34 mm, also not significantly different from manual segmentation (p = 0.29). U-Net-based neural networks perform well for segmenting the trigeminal nerve and vessels from preoperative MRI volumes. This technology enables the development of quantitative and objective metrics for radiographic evaluation of TN.
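
A minimal sketch of the best-performing configuration reported above, using the segmentation_models_pytorch package: a U-Net with an SE-ResNet50 encoder labeling each pixel as nerve, vasculature, or background, with a small helper for the Dice and IoU metrics. The single-channel input and class count are assumptions on my part.

```python
# Hedged sketch: U-Net with SE-ResNet50 encoder plus Dice/IoU helpers.
import segmentation_models_pytorch as smp
import torch

model = smp.Unet(
    encoder_name="se_resnet50",     # backbone reported as best in the study
    encoder_weights=None,           # no pretrained download for this sketch
    in_channels=1,                  # single-channel CISS MRI slice (assumed)
    classes=3,                      # nerve, vasculature, background
)

def dice_iou(pred, target, cls, eps=1e-6):
    """Per-class Dice and IoU computed from hard label maps."""
    p, t = (pred == cls), (target == cls)
    inter = (p & t).sum().float()
    dice = (2 * inter + eps) / (p.sum() + t.sum() + eps)
    iou = (inter + eps) / ((p | t).sum() + eps)
    return dice.item(), iou.item()

x = torch.randn(1, 1, 256, 256)                # dummy MRI slice
model.eval()
with torch.no_grad():
    pred = model(x).argmax(dim=1)              # hard class labels
print(dice_iou(pred, pred, cls=1))             # trivially (1.0, 1.0)
```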

Enhanced diagnostic and prognostic assessment of cardiac amyloidosis using combined ¹¹C-PiB PET/CT and ⁹⁹ᵐTc-DPD scintigraphy.

Hong Z, Spielvogel CP, Xue S, Calabretta R, Jiang Z, Yu J, Kluge K, Haberl D, Nitsche C, Grünert S, Hacker M, Li X

PubMed · Jul 1, 2025
Cardiac amyloidosis (CA) is a severe condition characterized by amyloid fibril deposition in the myocardium, leading to restrictive cardiomyopathy and heart failure. Differentiating between amyloidosis subtypes is crucial because treatment strategies differ. Individual conventional diagnostic methods lack the accuracy needed for effective subtype identification. This study aimed to evaluate the efficacy of combining ¹¹C-PiB PET/CT and ⁹⁹ᵐTc-DPD scintigraphy in detecting CA and distinguishing between its main subtypes, light chain (AL) and transthyretin (ATTR) amyloidosis, while assessing the association of imaging findings with patient prognosis. We retrospectively evaluated the diagnostic efficacy of combining ¹¹C-PiB PET/CT and ⁹⁹ᵐTc-DPD scintigraphy in a cohort of 50 patients with clinical suspicion of CA. Semi-quantitative imaging markers were extracted from the images, and diagnostic performance was calculated against biopsy results or genetic testing. Both machine learning models and a rationale-based model were developed to detect CA and classify subtypes. Survival prediction over five years was assessed using a random survival forest model, and prognostic value was assessed using Kaplan-Meier estimators and Cox proportional hazards models. The combined imaging approach significantly improved diagnostic accuracy, with ¹¹C-PiB PET and ⁹⁹ᵐTc-DPD scintigraphy showing complementary strengths in detecting AL and ATTR, respectively. The machine learning model achieved an AUC of 0.94 (95% CI 0.93-0.95) for CA subtype differentiation, while the rationale-based model demonstrated strong diagnostic ability with AUCs of 0.95 (95% CI 0.88-1.00) for ATTR and 0.88 (95% CI 0.77-0.96) for AL. Survival prediction models identified key prognostic markers, with significant stratification of overall mortality based on predicted survival (p = 0.006; adjusted HR 2.43, 95% CI 1.03-5.71). The integration of ¹¹C-PiB PET/CT and ⁹⁹ᵐTc-DPD scintigraphy, supported by both machine learning and rationale-based models, enhances the diagnostic accuracy and prognostic assessment of cardiac amyloidosis, with significant implications for clinical practice.
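
The prognostic analysis described above can be illustrated with the lifelines package: a Cox proportional hazards fit and a Kaplan-Meier stratification by a model-predicted risk group. The columns below are synthetic placeholders, not the study's imaging markers.

```python
# Hedged sketch: Cox PH fit and Kaplan-Meier stratification on synthetic data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "time": rng.exponential(36, 50),          # months of follow-up
    "event": rng.integers(0, 2, 50),          # 1 = death observed
    "risk_group": rng.integers(0, 2, 50),     # e.g. model-predicted high risk
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                            # hazard ratios with 95% CIs

km = KaplanMeierFitter()
for g, sub in df.groupby("risk_group"):
    km.fit(sub["time"], sub["event"], label=f"group {g}")
    print(g, km.median_survival_time_)         # per-group median survival
```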

Regression modeling with convolutional neural network for predicting extent of resection from preoperative MRI in giant pituitary adenomas: a pilot study.

Patel BK, Tariciotti L, DiRocco L, Mandile A, Lohana S, Rodas A, Zohdy YM, Maldonado J, Vergara SM, De Andrade EJ, Revuelta Barbero JM, Reyes C, Solares CA, Garzon-Muvdi T, Pradilla G

PubMed · Jul 1, 2025
Giant pituitary adenomas (GPAs) are challenging skull base tumors due to their size and proximity to critical neurovascular structures. Achieving gross-total resection (GTR) can be difficult, and residual tumor burden is commonly reported. This study evaluated the ability of convolutional neural networks (CNNs) to predict the extent of resection (EOR) from preoperative MRI, with the goals of enhancing surgical planning, improving preoperative patient counseling, and strengthening multidisciplinary postoperative coordination of care. A retrospective study of 100 consecutive patients with GPAs was conducted; all patients underwent surgery via the endoscopic endonasal transsphenoidal approach. CNN models were trained on DICOM images from preoperative MRI to predict EOR, using a split of 80 patients for training and 20 for validation. The models included different architectural modules to refine image selection and predict EOR from tumor-containing images in various anatomical planes. Model design, training, and validation were conducted in a local environment in Python using TensorFlow. The median preoperative tumor volume was 19.4 cm³, and the median EOR was 94.5%, with GTR achieved in 49% of cases. The CNN model showed high predictive accuracy, especially when analyzing images from the coronal plane, with a root mean square error of 2.9916 and a mean absolute error of 2.6225. The coefficient of determination (R²) was 0.9823, indicating excellent model performance. CNN-based models may effectively predict the EOR for GPAs from preoperative MRI scans, offering a promising tool for presurgical assessment and patient counseling. Confirmatory studies with large patient samples are needed to definitively validate these findings.
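
Since the abstract states the models were built in Python with TensorFlow, a minimal Keras sketch of a CNN regression head predicting a single continuous EOR value from a preoperative MR slice may help make the setup concrete; the layer sizes and single-slice 2D input are simplifying assumptions, not the authors' architecture.

```python
# Hedged sketch: small CNN regressor tracking the RMSE/MAE metrics reported.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 1)),               # one MR slice (assumed)
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPool2D(),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                          # EOR as a percentage
])
model.compile(optimizer="adam", loss="mse",
              metrics=[tf.keras.metrics.RootMeanSquaredError(),
                       tf.keras.metrics.MeanAbsoluteError()])
model.summary()
```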

Deep learning image enhancement algorithms in PET/CT imaging: a phantom and sarcoma patient radiomic evaluation.

Bonney LM, Kalisvaart GM, van Velden FHP, Bradley KM, Hassan AB, Grootjans W, McGowan DR

PubMed · Jul 1, 2025
PET/CT imaging data contain a wealth of quantitative information that can make valuable contributions to characterising tumours. A growing body of work focuses on the use of deep-learning (DL) techniques for denoising PET data. These models are clinically evaluated prior to use; however, quantitative image assessment offers potential for further evaluation. This work uses radiomic features to compare two manufacturer DL image enhancement algorithms, one of which has been commercialised, against 'gold-standard' image reconstruction techniques in phantom data and a sarcoma patient dataset (N = 20). All studies in the retrospective sarcoma clinical [¹⁸F]FDG dataset were acquired on either a GE Discovery 690 or 710 PET/CT scanner, with volumes segmented by an experienced nuclear medicine radiologist. The modular heterogeneous imaging phantom used in this work was filled with [¹⁸F]FDG, and five repeat acquisitions of the phantom were performed on a GE Discovery 710 PET/CT scanner. The DL-enhanced images were compared both to the 'gold-standard' images the algorithms were trained to emulate and to the input images. Differences between image sets were tested for significance across 93 Image Biomarker Standardisation Initiative (IBSI) standardised radiomic features. Comparing DL-enhanced images to the 'gold-standard', 4.0% and 9.7% of radiomic features measured significantly different (critical p < 0.0005) in the phantom and patient data, respectively (averaged over the two DL algorithms). Larger differences were observed when comparing DL-enhanced images to algorithm input images, with 29.8% and 43.0% of radiomic features measuring significantly different in the phantom and patient data, respectively (averaged over the two DL algorithms). DL-enhanced images were found to be similar to images generated using the 'gold-standard' target reconstruction method, with more than 80% of radiomic features not significantly different across all comparisons on unseen phantom and sarcoma patient data. This result offers insight into the performance of the DL algorithms and demonstrates potential applications for DL in radiomics harmonisation and for radiomic features in the quantitative evaluation of DL algorithms.
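
The feature-wise comparison can be sketched as follows: for each IBSI-style radiomic feature, test DL-enhanced against 'gold-standard' values across cases and count features falling below a strict critical p. The threshold 0.05/93 ≈ 0.0005 matches the reported critical p under a Bonferroni-style correction, but that reading, and the paired Wilcoxon test used here, are assumptions rather than the paper's stated procedure.

```python
# Hedged sketch: per-feature paired tests with a strict critical p (synthetic).
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
n_cases, n_features = 20, 93
gold = rng.normal(size=(n_cases, n_features))
dl = gold + rng.normal(scale=0.05, size=(n_cases, n_features))

p_critical = 0.05 / n_features                 # ~0.0005 for 93 features
pvals = np.array([wilcoxon(dl[:, j], gold[:, j]).pvalue
                  for j in range(n_features)])
frac = (pvals < p_critical).mean()
print(f"{100 * frac:.1f}% of features significantly different")
```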

A Review of the Opportunities and Challenges with Large Language Models in Radiology: The Road Ahead.

Soni N, Ora M, Agarwal A, Yang T, Bathla G

PubMed · Jul 1, 2025
In recent years, generative artificial intelligence (AI), particularly large language models (LLMs) and their multimodal counterparts, including vision language models, has generated considerable interest in the global AI discourse. LLMs, or pre-trained language models (such as ChatGPT, Med-PaLM, and LLaMA), are neural network architectures trained on extensive text data that excel at language comprehension and generation. Multimodal LLMs, a subset of foundation models, are trained on multimodal datasets that integrate text with another modality, such as images, to better learn universal representations akin to human cognition. This versatility enables them to excel at tasks like chatbots, translation, and creative writing, while facilitating knowledge sharing through transfer learning, federated learning, and synthetic data creation. Several of these models have potentially appealing applications in the medical domain, including, but not limited to, enhancing patient care by processing patient data; summarizing reports and relevant literature; providing diagnostic, treatment, and follow-up recommendations; and handling ancillary tasks like coding and billing. As radiologists enter this promising but uncharted territory, it is imperative that they be familiar with the basic terminology and processes of LLMs. Herein, we present an overview of LLMs and their potential applications and challenges in the imaging domain.

Enhancing Magnetic Resonance Imaging (MRI) Report Comprehension in Spinal Trauma: Readability Analysis of AI-Generated Explanations for Thoracolumbar Fractures.

Sing DC, Shah KS, Pompliano M, Yi PH, Velluto C, Bagheri A, Eastlack RK, Stephan SR, Mundis GM

PubMed · Jul 1, 2025
Magnetic resonance imaging (MRI) reports are challenging for patients to interpret and may cause unnecessary anxiety. The advent of advanced artificial intelligence (AI) large language models (LLMs), such as GPT-4o, holds promise for translating complex medical information into layman's terms. This paper evaluates the accuracy, helpfulness, and readability of GPT-4o in explaining MRI reports of patients with thoracolumbar fractures. MRI reports of 20 patients presenting with thoracic or lumbar vertebral body fractures were obtained, and GPT-4o was prompted to explain each report in layman's terms. The generated explanations were presented to 7 board-certified spine surgeons for evaluation of their helpfulness and accuracy. The MRI report text and GPT-4o explanations were then analyzed for readability using the Flesch Reading Ease Score (FRES) and Flesch-Kincaid Grade Level (FKGL) scale. The layman explanations provided by GPT-4o were found to be helpful by all surgeons in 17 cases, and by 6 of 7 surgeons in the remaining 3 cases. The GPT-4o-generated layman reports were rated "accurate" by all 7 surgeons in 11/20 cases (55%); in an additional 5/20 cases (25%), 6 of 7 surgeons agreed on their accuracy; and in the remaining 4/20 cases (20%), accuracy ratings varied, with 4 or 5 surgeons considering them accurate. Review of surgeon feedback on inaccuracies revealed that the source radiology reports were often insufficiently detailed. The mean FRES score of the MRI reports was significantly lower than that of the GPT-4o explanations (32.15, SD 15.89 vs 53.9, SD 7.86; P<.001). The mean FKGL of the MRI reports trended higher than that of the GPT-4o explanations (11th-12th grade vs 10th-11th grade level; P=.11). Overall helpfulness and readability ratings for the AI-generated summaries were high, with few inaccuracies recorded. This study demonstrates the potential of GPT-4o to serve as a valuable tool for enhancing patient comprehension of MRI report findings.
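
Both readability metrics the study reports are implemented in the textstat package, so the scoring step can be sketched in a few lines; the sample report and layman strings below are invented placeholders.

```python
# Hedged sketch: FRES and FKGL scoring with textstat on placeholder text.
import textstat

report = ("T12 vertebral body compression deformity with marrow edema, "
          "consistent with acute fracture; posterior elements intact.")
layman = ("One bone in your lower spine has a fresh break. The back part "
          "of the bone is not damaged.")

for label, text in (("MRI report", report), ("Layman version", layman)):
    print(label,
          "FRES:", textstat.flesch_reading_ease(text),
          "FKGL:", textstat.flesch_kincaid_grade(text))
```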

Effective connectivity between the cerebellum and fronto-temporal regions correctly classify major depressive disorder: fMRI study using a multi-site dataset.

Dai P, Huang K, Shi Y, Xiong T, Zhou X, Liao S, Huang Z, Yi X, Grecucci A, Chen BT

PubMed · Jul 1, 2025
Major depressive disorder (MDD) diagnosis relies mainly on subjective self-reporting and clinical assessment. Resting-state functional magnetic resonance imaging (rs-fMRI) and its analysis of effective connectivity (EC) offer a quantitative approach to understanding the directional interactions between brain regions, presenting a potential objective method for MDD classification. Granger causality analysis was used to extract EC features from a large, multi-site rs-fMRI dataset of MDD patients. The ComBat algorithm was applied to adjust for site differences, while multivariate linear regression controlled for age and sex. Discriminative EC features for MDD were identified using two-sample t-tests and model-based feature selection, with the LightGBM algorithm used for classification. The performance and generalizability of the model were evaluated using nested five-fold cross-validation and an independent test dataset. Ninety-seven EC features belonging to the cerebellum and fronto-temporal regions were identified as highly discriminative for MDD. The classification model using these features achieved an accuracy of 94.35%, with a sensitivity of 93.52% and specificity of 95.25% in cross-validation. On the independent dataset, the model achieved an accuracy of 94.74%, sensitivity of 90.59%, and specificity of 96.75%. The study demonstrates that EC features from rs-fMRI can effectively discriminate MDD from healthy controls, suggesting that EC analysis could be a valuable tool for assisting the clinical diagnosis of MDD. This method shows promise for enhancing the objectivity of MDD diagnosis through neuroimaging biomarkers.
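
A hedged sketch of the classification stage using scikit-learn and LightGBM: univariate selection (for two groups, the ANOVA F-test is equivalent to a two-sample t-test) feeds a LightGBM classifier inside a cross-validated pipeline, so selection happens within each fold, in the spirit of the study's nested design. The synthetic arrays stand in for the harmonized EC matrix; ComBat and Granger causality extraction are upstream steps not reproduced here, and k = 97 merely echoes the 97 retained features.

```python
# Hedged sketch: t-test-style feature selection + LightGBM in a CV pipeline.
import lightgbm as lgb
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 500))               # subjects x EC features
y = rng.integers(0, 2, size=200)              # 1 = MDD, 0 = healthy control

pipe = make_pipeline(
    SelectKBest(f_classif, k=97),             # select inside each CV fold
    lgb.LGBMClassifier(n_estimators=200, random_state=0),
)
print(cross_val_score(pipe, X, y, cv=5).mean())
```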

The power spectrum map of gyro-sulcal functional activity dissociation in macaque brains.

Sun Y, Zhou J, Mao W, Zhang W, Zhao B, Duan X, Zhang S, Zhang T, Jiang X

PubMed · Jul 1, 2025
Nonhuman primates, particularly rhesus macaques, have served as crucial animal models for investigating complex brain functions. While previous studies have explored neural activity features in macaques, the gyro-sulcal functional dissociation characteristics remain largely unknown. In this study, we employ a one-dimensional convolutional neural network to differentiate resting-state functional magnetic resonance imaging (rs-fMRI) signals between gyri and sulci in macaque brains, and further investigate frequency-specific dissociations between gyri and sulci inferred from the power spectral density (PSD) of the rs-fMRI signals. Experimental results based on a large cohort of 440 macaques from two independent sites demonstrate substantial frequency-specific dissociation between gyral and sulcal signals at both whole-brain and regional levels. The magnitude of gyral PSD is significantly larger than that of sulcal PSD within the 0.01-0.1 Hz range, suggesting that gyri and sulci may play distinct roles as global hubs and local processing units, respectively, for functional activity transmission and interaction in macaque brains. In conclusion, our study establishes one of the first power spectrum maps of gyro-sulcal functional activity dissociation in macaque brains, providing a novel perspective for systematically exploring the neural mechanisms of functional dissociation in mammalian brains.
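
The frequency-specific comparison can be illustrated with SciPy: estimate the PSD of an rs-fMRI time series with Welch's method and integrate it over the 0.01-0.1 Hz band reported above. The repetition time and the synthetic signals are assumptions.

```python
# Hedged sketch: Welch PSD and band power in 0.01-0.1 Hz for BOLD-like series.
import numpy as np
from scipy.integrate import trapezoid
from scipy.signal import welch

tr = 2.0                                       # assumed repetition time (s)
fs = 1.0 / tr                                  # sampling frequency (Hz)
rng = np.random.default_rng(4)
gyral = rng.normal(size=400)                   # stand-in BOLD time series
sulcal = rng.normal(size=400)

def band_power(sig, lo=0.01, hi=0.1):
    f, psd = welch(sig, fs=fs, nperseg=128)
    band = (f >= lo) & (f <= hi)
    return trapezoid(psd[band], f[band])       # integrate PSD over the band

print("gyral:", band_power(gyral), "sulcal:", band_power(sulcal))
```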