
Deep Learning for Automated Measures of SUV and Molecular Tumor Volume in [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL, [¹⁸F]FDG, and [¹⁷⁷Lu]Lu-PSMA-617 Imaging with Global Threshold Regional Consensus Network.

Jackson P, Buteau JP, McIntosh L, Sun Y, Kashyap R, Casanueva S, Ravi Kumar AS, Sandhu S, Azad AA, Alipour R, Saghebi J, Kong G, Jewell K, Eifer M, Bollampally N, Hofman MS

pubmed logopapers · Sep 18, 2025
Metastatic castration-resistant prostate cancer has a high rate of mortality, with a limited number of effective treatments after hormone therapy. Radiopharmaceutical therapy with [¹⁷⁷Lu]Lu-prostate-specific membrane antigen-617 (LuPSMA) is one treatment option; however, response varies and is partly predicted by PSMA expression and metabolic activity, assessed on [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL and [¹⁸F]FDG PET, respectively. Automated methods to measure these on PET imaging have previously yielded modest accuracy. Refining computational workflows and standardizing approaches may improve patient selection and prognostication for LuPSMA therapy. Methods: PET/CT and quantitative SPECT/CT images from an institutional cohort of patients staged for LuPSMA therapy were annotated for total disease burden. In total, 676 [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL PET, 390 [¹⁸F]FDG PET, and 477 LuPSMA SPECT images were used for development of the automated workflow, which was then tested on 56 cases with externally referred PET/CT staging. A segmentation framework, the Global Threshold Regional Consensus Network, was developed based on nnU-Net, with processing refinements to improve boundary definition and overall label accuracy. Results: Using the model to contour disease extent, the mean volumetric Dice similarity coefficient was 0.94 for [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL PET, 0.84 for [¹⁸F]FDG PET, and 0.97 for LuPSMA SPECT. On external test cases, Dice accuracy was 0.95 and 0.84 on PSMA and FDG PET, respectively. The refined models yielded consistent improvements over nnU-Net, with an increase of 3%-5% in Dice accuracy and 10%-17% in surface agreement. Quantitative biomarkers were compared with a human-defined ground truth using the Pearson coefficient, with scores for [⁶⁸Ga]PSMA-11 or [¹⁸F]DCFPyL, [¹⁸F]FDG, and LuPSMA, respectively, of 0.98, 0.94, and 0.99 for disease volume; 0.98, 0.88, and 0.99 for SUVmean; 0.96, 0.91, and 0.99 for SUVmax; and 0.97, 0.96, and 0.99 for volume-intensity product. Conclusion: Delineation of disease extent and tracer avidity can be performed with a high degree of accuracy using automated deep learning methods. By incorporating threshold-based postprocessing, the tools can closely match the output of manual workflows. Pretrained models and scripts to adapt to institutional data are provided for open use.
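
The abstract reports volumetric Dice and threshold-based postprocessing but no code; below is a minimal sketch of how those pieces fit together, assuming a boolean network mask and an SUV volume. The SUV cutoff and voxel size here are illustrative assumptions, not values from the paper.

```python
# Sketch only: Dice, global-threshold refinement, and the quantitative
# biomarkers named in the abstract, on a synthetic stand-in for a PET SUV map.
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient between two boolean masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

def threshold_refine(mask: np.ndarray, suv: np.ndarray, cutoff: float) -> np.ndarray:
    """Keep only predicted voxels whose SUV clears a global threshold."""
    return np.logical_and(mask, suv >= cutoff)

def biomarkers(mask: np.ndarray, suv: np.ndarray, voxel_ml: float) -> dict:
    """Disease volume (mL), SUVmean, SUVmax, and volume-intensity product."""
    vals = suv[mask]
    vol_ml = mask.sum() * voxel_ml
    return {"volume_ml": vol_ml,
            "suv_mean": float(vals.mean()),
            "suv_max": float(vals.max()),
            "volume_intensity": vol_ml * float(vals.mean())}

rng = np.random.default_rng(0)
suv = rng.gamma(2.0, 2.0, size=(64, 64, 32))   # toy SUV volume
truth = suv > 6.0                              # stand-in manual annotation
pred = suv > 5.5                               # stand-in network prediction
refined = threshold_refine(pred, suv, cutoff=6.0)   # hypothetical cutoff
print(f"Dice raw={dice(pred, truth):.3f}, refined={dice(refined, truth):.3f}")
print(biomarkers(refined, suv, voxel_ml=0.064))     # illustrative voxel size
```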

Assessing the Feasibility of Deep Learning-Based Attenuation Correction Using Photon Emission Data in 18F-FDG Images for Dedicated Head and Neck PET Scanners.

Shahrbabaki Mofrad M, Ghafari A, Amiri Tehrani Zade A, Aghahosseini F, Ay M, Farzenefar S, Sheikhzadeh P

pubmed logopapers · Sep 18, 2025
This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) 18F-FDG PET images, focusing on head and neck imaging. Materials and Methods: A Residual Network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. For validation during training and for testing, images from 21 and 24 patients without pathology or artifacts were used, respectively, and 12 images with pathologies were used for independent testing. Prediction accuracy was assessed using RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images. Statistical significance between the contrast and SNR of reference and predicted images was assessed using a paired-sample t-test. Results: Two nuclear medicine physicians evaluated the predicted head and neck MAC images, finding them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. For the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026, respectively. No significant differences were found in SNR and contrast between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05). Conclusion: The deep learning network demonstrated the ability to directly generate head and neck MAC images that closely resembled the reference images. With additional training data, the model has the potential to be used in dedicated head and neck PET scanners without requiring computed tomography (CT) for attenuation correction.
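
For context, the four reported similarity metrics can be computed with scikit-image; this is a generic sketch on synthetic slices, not the study's evaluation code.

```python
# Sketch: PSNR, SSIM, RMSE, and MSE between a reference MAC slice and a
# predicted slice. Arrays are synthetic stand-ins for PET images.
import numpy as np
from skimage.metrics import (mean_squared_error, peak_signal_noise_ratio,
                             structural_similarity)

rng = np.random.default_rng(1)
reference = rng.random((256, 256)).astype(np.float32)   # "true" MAC slice
predicted = reference + rng.normal(0, 0.01, reference.shape).astype(np.float32)

data_range = float(reference.max() - reference.min())
mse = mean_squared_error(reference, predicted)
rmse = float(np.sqrt(mse))
psnr = peak_signal_noise_ratio(reference, predicted, data_range=data_range)
ssim = structural_similarity(reference, predicted, data_range=data_range)
print(f"MSE={mse:.6f} RMSE={rmse:.4f} PSNR={psnr:.2f} dB SSIM={ssim:.4f}")
```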

Optimising Generalisable Deep Learning Models for CT Coronary Segmentation: A Multifactorial Evaluation.

Zhang S, Gharleghi R, Singh S, Shen C, Adikari D, Zhang M, Moses D, Vickers D, Sowmya A, Beier S

pubmed logopapers · Sep 18, 2025
Coronary artery disease (CAD) remains a leading cause of morbidity and mortality worldwide, with incidence rates continuing to rise. Automated coronary artery segmentation of medical images can ultimately improve CAD management by enabling more advanced and efficient diagnostic assessments. Deep learning-based segmentation methods have shown significant promise, offering higher accuracy while reducing reliance on manual input. However, achieving consistent performance across diverse datasets remains a persistent challenge due to substantial variability in imaging protocols, equipment, and patient-specific factors such as signal intensities, anatomical differences, and disease severity. This study investigates the influence of image quality and resolution, governed by vessel size and common disease characteristics that introduce artefacts, such as calcification, on coronary artery segmentation accuracy in computed tomography coronary angiography (CTCA). Two datasets were used for model training and validation: the publicly available ASOCA dataset (40 cases) and a GeoCAD dataset (70 cases) with more cases of coronary disease. Coronary artery segmentations were generated using three deep learning architectures: default U-Net, Swin-UNETR, and EfficientNet-LinkNet. The impact of various factors on model generalisation was evaluated, focusing on imaging characteristics (contrast-to-noise ratio, artery contrast enhancement, and edge sharpness) and the extent of calcification at both the coronary tree and individual vessel branch levels. The calcification score ranges considered were 0 (no calcification), 1-99 (low), 100-399 (moderate), and > 400 (high). The findings demonstrated that image features, including artery contrast enhancement (r = 0.408, p < 0.001) and edge sharpness (r = 0.239, p = 0.046), were significantly correlated with improved segmentation performance in test cases. Regardless of severity, calcification had a negative impact on segmentation accuracy, with low calcification degrading segmentation the most (p < 0.05). This may be because smaller calcified lesions produce less distinct contrast against the bright lumen, making it harder for the model to accurately identify and segment them. Additionally, in males, a larger diameter of the first obtuse marginal branch (OM1) (p = 0.036) was associated with improved segmentation performance for OM1. Similarly, in females, larger diameters of the left main (LM) coronary artery (p = 0.008) and right coronary artery (RCA) (p < 0.001) were associated with better segmentation performance for the LM and RCA, respectively. These findings emphasise the importance of accounting for imaging characteristics and anatomical variability when developing generalisable deep learning models for coronary artery segmentation. Unlike previous studies, which broadly acknowledge the role of image quality in segmentation, our work quantitatively demonstrates the extent to which contrast enhancement, edge sharpness, calcification, and vessel diameter affect segmentation performance, offering a data-driven foundation for model adaptation strategies. Potential improvements include optimising pre-segmentation imaging (e.g. ensuring adequate edge sharpness in low-contrast regions) and developing algorithms to address vessel-specific challenges, such as improving segmentation of low-level calcifications and accurately identifying the LM, RCA, and OM1 at smaller diameters.
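
The correlation analysis above reduces to per-case Pearson tests between an image-quality measure and the Dice score, plus binning by calcification score; here is a sketch on synthetic values (the bin edges follow the ranges quoted in the abstract; everything else is illustrative).

```python
# Sketch: Pearson's r between a per-case quality metric and Dice, then
# grouping of cases by the calcification-score bins quoted above.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
edge_sharpness = rng.normal(1.0, 0.2, 70)                  # per-case quality metric
dice = 0.75 + 0.10 * edge_sharpness + rng.normal(0, 0.03, 70)

r, p = pearsonr(edge_sharpness, dice)
print(f"edge sharpness vs. Dice: r = {r:.3f}, p = {p:.3g}")

# Bins: 0 (none), 1-99 (low), 100-399 (moderate), >= 400 (high).
calc_score = rng.integers(0, 800, 70)
bins = np.digitize(calc_score, [1, 100, 400])              # 0..3 = none..high
for b, label in enumerate(["none", "low", "moderate", "high"]):
    sel = bins == b
    if sel.any():
        print(f"{label:8s} n={sel.sum():2d}  mean Dice={dice[sel].mean():.3f}")
```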

HybridMamba: A Dual-domain Mamba for 3D Medical Image Segmentation

Weitong Wu, Zhaohu Xing, Jing Gong, Qin Peng, Lei Zhu

arxiv logopreprint · Sep 18, 2025
In 3D biomedical image segmentation, Mamba exhibits superior performance because it addresses the limitations of CNNs in modeling long-range dependencies and mitigates the substantial computational overhead that Transformer-based frameworks incur when processing high-resolution medical volumes. However, placing undue emphasis on global context modeling may inadvertently compromise critical local structural information, leading to boundary ambiguity and regional distortion in segmentation outputs. We therefore propose HybridMamba, an architecture employing two complementary mechanisms: 1) a feature scanning strategy that progressively integrates representations from both axial-traversal and local-adaptive pathways to harmonize local and global representations, and 2) a gated module combining spatial- and frequency-domain analysis for comprehensive contextual modeling. In addition, we collect a multi-center CT dataset for lung cancer. Experiments on MRI and CT datasets demonstrate that HybridMamba significantly outperforms state-of-the-art methods in 3D medical image segmentation.
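
The preprint's code is not shown here, but a gated dual-domain block in the spirit described (one spatial branch, one Fourier-magnitude branch, blended by a learned sigmoid gate) might look like the following PyTorch sketch; all layer shapes are assumptions, and this omits the Mamba scanning pathways entirely.

```python
# Minimal sketch (not the authors' code) of a gated spatial-frequency block.
import torch
import torch.nn as nn

class GatedSpatialFrequency3D(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv3d(channels, channels, 3, padding=1)
        self.freq = nn.Conv3d(channels, channels, 1)   # acts on FFT magnitude
        self.gate = nn.Sequential(nn.Conv3d(2 * channels, channels, 1),
                                  nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, C, D, H, W)
        s = self.spatial(x)                                # spatial-domain branch
        mag = torch.abs(torch.fft.fftn(x, dim=(-3, -2, -1)))
        f = self.freq(mag)                                 # frequency-domain branch
        g = self.gate(torch.cat([s, f], dim=1))            # per-voxel blend weight
        return g * s + (1 - g) * f

block = GatedSpatialFrequency3D(channels=8)
out = block(torch.randn(1, 8, 16, 32, 32))
print(out.shape)   # torch.Size([1, 8, 16, 32, 32])
```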

Integrating artificial intelligence with Gamma Knife radiosurgery in treating meningiomas and schwannomas: a review.

Alhosanie TN, Hammo B, Klaib AF, Alshudifat A

pubmed logopapers · Sep 18, 2025
Meningiomas and schwannomas are benign tumors that affect the central nervous system, comprising up to one-third of intracranial neoplasms. Gamma Knife radiosurgery (GKRS), a form of stereotactic radiosurgery (SRS), is a type of radiation therapy. Although referred to as "surgery," GKRS does not involve incisions; instead, the Gamma Knife device delivers highly focused gamma rays to treat lesions or tumors, primarily in the brain. In radiation oncology, machine learning (ML) has been used in various areas, including outcome prediction, quality control, treatment planning, and image segmentation. This review showcases the advantages of integrating artificial intelligence with Gamma Knife technology in treating schwannomas and meningiomas. The review adheres to PRISMA guidelines. We searched the PubMed, Scopus, and IEEE databases to identify studies published between 2021 and March 2025 that met our inclusion and exclusion criteria, focusing on AI algorithms applied to patients with vestibular schwannoma or meningioma treated with GKRS. Two reviewers participated in data extraction and quality assessment. A total of nine studies were reviewed. One notable deep learning (DL) model is a dual-pathway convolutional neural network (CNN) that integrates T1-weighted (T1W) and T2-weighted (T2W) MRI scans; tested on 861 patients who underwent GKRS, it achieved a Dice Similarity Coefficient (DSC) of 0.90. ML-based radiomics models have also demonstrated that certain radiomic features can predict the response of vestibular schwannomas and meningiomas to radiosurgery; among these, a neural network model exhibited the best performance. AI models were also employed to predict complications following GKRS, such as peritumoral edema. A Random Survival Forest (RSF) model built on clinical, semantic, and radiomics variables achieved C-index scores of 0.861 and 0.780, enabling classification of patients into high- and low-risk categories for developing post-GKRS edema. AI and ML models show great potential in tumor segmentation, volumetric assessment, and prediction of treatment outcomes for vestibular schwannomas and meningiomas treated with GKRS. However, successful clinical implementation depends on overcoming challenges related to external validation, standardization, and computational demands. Future research should focus on large-scale, multi-institutional validation studies, integration of multimodal data, and cost-effective strategies for deploying AI technologies.
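
As a rough illustration of the Random Survival Forest workflow the review describes (fit on clinical/semantic/radiomics covariates, score by C-index, split into risk groups), here is a sketch using scikit-survival with synthetic data; nothing below reproduces the reviewed study's variables or results.

```python
# Sketch: RSF risk model for a time-to-event outcome such as post-GKRS edema.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import concordance_index_censored
from sksurv.util import Surv

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))                     # stand-in covariates
time = np.clip(rng.exponential(24, 200) + X[:, 0] * 2, 0.1, None)  # months
event = rng.random(200) < 0.6                      # True = edema observed
y = Surv.from_arrays(event=event, time=time)

rsf = RandomSurvivalForest(n_estimators=100, random_state=0).fit(X, y)
risk = rsf.predict(X)                              # higher = higher risk
cindex = concordance_index_censored(event, time, risk)[0]
high_risk = risk > np.median(risk)                 # two-group stratification
print(f"C-index = {cindex:.3f}, high-risk patients = {high_risk.sum()}")
```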

Guidance for reporting artificial intelligence technology evaluations for ultrasound scanning in regional anaesthesia (GRAITE-USRA): an international multidisciplinary consensus reporting framework.

Zhang X, Ferry J, Hewson DW, Collins GS, Wiles MD, Zhao Y, Martindale APL, Tomaschek M, Bowness JS

pubmed logopapers · Sep 18, 2025
The application of artificial intelligence to enhance the clinical practice of ultrasound-guided regional anaesthesia is of increasing interest to clinicians, researchers and industry. The lack of standardised reporting for studies in this field hinders the comparability, reproducibility and integration of findings. We aimed to develop a consensus-based reporting guideline for research evaluating artificial intelligence applications for ultrasound scanning in regional anaesthesia. We followed methodology recommended by the EQUATOR Network for the development of reporting guidelines. Review of published literature and expert consultation generated a preliminary list of candidate reporting items. An international, multidisciplinary, modified Delphi process was then undertaken, involving experts from clinical practice, academia and industry. Two rounds of expert consultation were conducted, in which participants evaluated each item for inclusion in a final reporting guideline, followed by an online discussion. A total of 67 experts participated in the first Delphi round, 63 in the second round and 25 in the roundtable consensus meeting. The GRAITE-USRA reporting guideline comprises 40 items addressing key aspects of reporting in artificial intelligence research for ultrasound scanning in regional anaesthesia. Specific items include ultrasound acquisition protocols and operator expertise, which are not covered in existing artificial intelligence reporting guidelines. The GRAITE-USRA reporting guideline provides a minimum set of recommendations for artificial intelligence-related research for ultrasound scanning in regional anaesthesia. Its adoption will promote consistent reporting standards, enhance transparency, improve study reproducibility and ultimately support the effective integration of evidence into clinical practice.

Rapid and robust quantitative cartilage assessment for the clinical setting: deep learning-enhanced accelerated T2 mapping.

Carretero-Gómez L, Wiesinger F, Fung M, Nunes B, Pedoia V, Majumdar S, Desai AD, Gatti A, Chaudhari A, Sánchez-Lacalle E, Malpica N, Padrón M

pubmed logopapers · Sep 18, 2025
Clinical adoption of T2 mapping is limited by poor reproducibility, lengthy examination times, and cumbersome image analysis. This study aimed to develop an accelerated deep learning (DL)-enhanced cartilage T2 mapping sequence (DL CartiGram), demonstrate its repeatability and reproducibility, and evaluate its accuracy compared to conventional T2 mapping using a semi-automatic pipeline. DL CartiGram was implemented using a modified 2D Multi-Echo Spin-Echo sequence at 3 T, incorporating parallel imaging and DL-based image reconstruction. Phantom tests were performed at two sites to obtain test-retest T2 maps, using single-echo spin-echo (SE) measurements as reference values. At one site, DL CartiGram and conventional T2 mapping were performed on 43 patients. T2 values were extracted from 52 patellar and femoral compartments using DL knee segmentation and the DOSMA framework. Repeatability and reproducibility were assessed using coefficients of variation (CV), Bland-Altman analysis, and concordance correlation coefficients (CCC). T2 differences were evaluated with Wilcoxon signed-rank tests, paired t tests, and accuracy CV. Phantom tests showed intra-site repeatability with CVs ≤ 2.52% and T2 precision ≤ 1 ms. Inter-site reproducibility showed a CV of 2.74% and a CCC of 99% (CI 92-100%). Bland-Altman analysis showed a bias of 1.56 ms between sites (p = 0.03), likely due to temperature effects. In vivo, DL CartiGram reduced scan time by 40%, yielding accurate cartilage T2 measurements (CV = 0.97%) with no significant differences compared to conventional T2 mapping (p = 0.1). DL CartiGram significantly accelerates T2 mapping, while still assuring excellent repeatability and reproducibility. Combined with the semi-automatic post-processing pipeline, it emerges as a promising tool for quantitative T2 cartilage biomarker assessment in clinical settings.
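
Underlying any multi-echo spin-echo T2 map is a per-voxel mono-exponential fit, S(TE) = S0 · exp(-TE/T2); this generic sketch fits one synthetic voxel's echo train with SciPy (echo times and noise level are illustrative, not the sequence's parameters).

```python
# Sketch: mono-exponential T2 fit from multi-echo spin-echo signals.
import numpy as np
from scipy.optimize import curve_fit

def model(te, s0, t2):
    """S(TE) = S0 * exp(-TE / T2)."""
    return s0 * np.exp(-te / t2)

te = np.array([10., 20., 30., 40., 50., 60., 70., 80.])   # echo times (ms)
rng = np.random.default_rng(4)
signal = model(te, 1000.0, 35.0) + rng.normal(0, 5, te.size)  # truth: T2 = 35 ms

(s0_fit, t2_fit), _ = curve_fit(model, te, signal, p0=(signal[0], 30.0))
print(f"fitted T2 = {t2_fit:.1f} ms (ground truth 35 ms)")
```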

Brain-HGCN: A Hyperbolic Graph Convolutional Network for Brain Functional Network Analysis

Junhao Jia, Yunyou Liu, Cheng Yang, Yifei Sun, Feiwei Qin, Changmiao Wang, Yong Peng

arxiv logopreprint · Sep 18, 2025
Functional magnetic resonance imaging (fMRI) provides a powerful non-invasive window into the brain's functional organization by generating complex functional networks, typically modeled as graphs. These brain networks exhibit a hierarchical topology that is crucial for cognitive processing. However, due to inherent spatial constraints, standard Euclidean GNNs struggle to represent these hierarchical structures without high distortion, limiting their clinical performance. To address this limitation, we propose Brain-HGCN, a geometric deep learning framework based on hyperbolic geometry, which leverages the intrinsic property of negatively curved space to model the brain's network hierarchy with high fidelity. Grounded in the Lorentz model, our model employs a novel hyperbolic graph attention layer with a signed aggregation mechanism to distinctly process excitatory and inhibitory connections, ultimately learning robust graph-level representations via a geometrically sound Fréchet mean for graph readout. Experiments on two large-scale fMRI datasets for psychiatric disorder classification demonstrate that our approach significantly outperforms a wide range of state-of-the-art Euclidean baselines. This work pioneers a new geometric deep learning paradigm for fMRI analysis, highlighting the immense potential of hyperbolic GNNs in the field of computational psychiatry.
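
For readers unfamiliar with the Lorentz model the paper builds on, the core operations (Minkowski inner product, geodesic distance, exponential map at the origin) are compact; the sketch below is a generic illustration, not the authors' implementation.

```python
# Sketch: Lorentz (hyperboloid) model primitives used by hyperbolic GNNs.
import torch

def minkowski_inner(x, y):
    """<x, y>_L = -x0*y0 + sum_i xi*yi along the last dimension."""
    return -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)

def lorentz_distance(x, y):
    """Geodesic distance d(x, y) = arccosh(-<x, y>_L)."""
    return torch.acosh(torch.clamp(-minkowski_inner(x, y), min=1.0 + 1e-7))

def expmap_origin(v):
    """Lift a Euclidean vector v into the hyperboloid via the origin's
    tangent space: x = (cosh|v|, sinh|v| * v/|v|)."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(1e-7)
    return torch.cat([torch.cosh(norm), torch.sinh(norm) * v / norm], dim=-1)

feats = torch.randn(5, 16)        # e.g. per-ROI fMRI node features
pts = expmap_origin(feats)        # points on the hyperboloid in R^17
print(lorentz_distance(pts[0], pts[1]))
```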

Optimized deep learning-accelerated single-breath-hold abdominal HASTE with and without fat saturation improves and accelerates abdominal imaging at 3 Tesla.

Tan Q, Kubicka F, Nickel D, Weiland E, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

pubmed logopapers · Sep 18, 2025
Deep learning-accelerated single-shot turbo-spin-echo techniques (DL-HASTE) enable single-breath-hold T2-weighted abdominal imaging. However, studies evaluating the image quality of DL-HASTE with and without fat saturation (FS) remain limited. This study aimed to prospectively evaluate the technical feasibility and image quality of abdominal DL-HASTE with and without FS at 3 Tesla. DL-HASTE of the upper abdomen was acquired with varied sequence parameters for FS, flip angle (FA), and field of view (FOV) in 10 healthy volunteers and 50 patients. DL-HASTE sequences were compared with clinical sequences (HASTE, HASTE-FS, and T2-TSE-FS BLADE). Two radiologists independently scored the sequences for overall image quality, delineation of abdominal organs, artifacts, and fat saturation on a 5-point Likert scale. Breath-hold time for DL-HASTE and DL-HASTE-FS was 21 ± 2 s with a fixed FA and 20 ± 2 s with a variable FA (p < 0.001), with no difference in overall image quality (p > 0.05). DL-HASTE required a 10% larger FOV than DL-HASTE-FS to avoid aliasing artifacts from subcutaneous fat. Both DL-HASTE and DL-HASTE-FS had significantly higher overall image quality scores than standard HASTE acquisitions (DL-HASTE vs. HASTE: 4.8 ± 0.40 vs. 4.1 ± 0.50; DL-HASTE-FS vs. HASTE-FS: 4.6 ± 0.50 vs. 3.6 ± 0.60; p < 0.001). Compared with T2-TSE-FS BLADE, DL-HASTE-FS provided higher overall image quality (4.6 ± 0.50 vs. 4.3 ± 0.63, p = 0.011). DL-HASTE achieved significantly higher image quality (p = 0.006) and higher organ sharpness scores (p < 0.001) than DL-HASTE-FS. Deep learning-accelerated HASTE with and without fat saturation was feasible at 3 Tesla and showed improved image quality compared with conventional sequences.
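
Paired Likert comparisons of the kind reported above are typically tested with a Wilcoxon signed-rank test; the abstract does not state which test was used, so the choice below is an assumption, and the reader scores are synthetic.

```python
# Sketch: paired comparison of 5-point Likert image-quality scores.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(5)
dl_haste = rng.integers(4, 6, 50)                      # mostly 4s and 5s
haste = np.clip(dl_haste - rng.integers(0, 2, 50), 1, 5)

# zero_method="zsplit" keeps pairs with identical scores in the test.
stat, p = wilcoxon(dl_haste, haste, zero_method="zsplit")
print(f"W = {stat:.1f}, p = {p:.4f}")
```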

Fusion of X-Ray Images and Clinical Data for a Multimodal Deep Learning Prediction Model of Osteoporosis: Algorithm Development and Validation Study.

Tang J, Yin X, Lai J, Luo K, Wu D

pubmed logopapers · Sep 18, 2025
Osteoporosis is a bone disease characterized by reduced bone mineral density and mass, which increase the risk of fragility fractures. Artificial intelligence can mine imaging features specific to different bone densities, shapes, and structures and fuse them with other multimodal features for synergistic diagnosis, improving prediction accuracy. This study aims to develop a multimodal model that fuses chest X-rays and clinical parameters for opportunistic screening of osteoporosis and to compare the experimental results with existing methods. We used multimodal data, comprising chest X-ray images and clinical data, from a total of 1780 patients at Chongqing Daping Hospital from January 2019 to August 2024. We adopted a probability fusion strategy to construct a multimodal model. The model uses a convolutional neural network as the backbone for image processing, fine-tuned via transfer learning to suit the specific task of this study. In addition, we introduced a gradient-based wavelet feature extraction method and combined it with an attention mechanism to assist in feature fusion, which enhanced the model's focus on key regions of the image and further improved its ability to extract image features. The proposed multimodal model outperforms traditional methods on all 4 evaluation metrics: area under the curve (AUC) value, accuracy, sensitivity, and specificity. Compared with the X-ray-only model, the multimodal model improved the AUC value significantly from 0.951 to 0.975 (P=.004), accuracy from 89.32% to 92.36% (P=.045), sensitivity from 89.82% to 91.23% (P=.03), and specificity from 88.64% to 93.92% (P=.008). While the multimodal model that fuses chest X-ray images and clinical data demonstrated superior performance compared with unimodal models and traditional methods, this study has several limitations: the dataset size may not capture the full diversity of the population, the retrospective design may introduce selection bias, and the lack of external validation limits the generalizability of the findings. Future studies should address these limitations by incorporating larger, more diverse datasets and conducting rigorous external validation to further establish the model's clinical utility.
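
Probability fusion, as described above, is a late-fusion scheme: each modality's model emits a class probability and the final prediction combines them. A minimal sketch follows; the fusion weight is a hypothetical choice, not the paper's value.

```python
# Sketch: weighted late fusion of per-patient osteoporosis probabilities.
import numpy as np

def fuse_probabilities(p_image: np.ndarray, p_clinical: np.ndarray,
                       w_image: float = 0.6) -> np.ndarray:
    """Weighted combination of the two modality-specific probabilities."""
    return w_image * p_image + (1.0 - w_image) * p_clinical

rng = np.random.default_rng(6)
p_img = rng.random(8)       # stand-in for CNN outputs on chest X-rays
p_cli = rng.random(8)       # stand-in for a clinical-parameter model
p_fused = fuse_probabilities(p_img, p_cli)
pred = (p_fused >= 0.5).astype(int)   # final binary screening decision
print(np.round(p_fused, 3), pred)
```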