
Comparison of diagnostic performance between manual diagnosis following PROMISE V2 and aPROMISE utilizing <sup>68</sup>Ga/<sup>18</sup>F-PSMA PET/CT.

Enei Y, Yanagisawa T, Okada A, Kuruma H, Okazaki C, Watanabe K, Lenzo NP, Kimura T, Miki K

Jul 15 2025
Automated PROMISE (aPROMISE), an artificial intelligence-supported software package for prostate-specific membrane antigen (PSMA) PET/CT based on PROMISE V2, has demonstrated diagnostic utility with better correspondence rates than manual diagnosis. However, previous studies have consistently utilized <sup>18</sup>F-PSMA PET/CT. We therefore investigated the diagnostic utility of aPROMISE using both <sup>18</sup>F- and <sup>68</sup>Ga-PSMA PET/CT in Japanese patients with metastatic prostate cancer (mPCa). We retrospectively evaluated 21 PSMA PET/CT images (<sup>68</sup>Ga-PSMA PET/CT: n = 12, <sup>18</sup>F-PSMA PET/CT: n = 9) from 21 patients with mPCa. A single highly experienced nuclear medicine radiologist performed manual diagnosis following PROMISE V2 and subsequently performed aPROMISE-assisted diagnosis to assess miTNM stage and details of metastatic sites. We compared diagnostic time and correspondence rates of miTNM diagnosis between manual and aPROMISE-assisted diagnoses, and additionally investigated differences in diagnostic performance between the two radioisotopes. aPROMISE-assisted diagnosis was associated with a significantly shorter median diagnostic time than manual diagnosis (427 s [IQR: 370-834] vs. 1114 s [IQR: 922-1291], p < 0.001). The time reduction with aPROMISE-assisted diagnosis was particularly notable with <sup>68</sup>Ga-PSMA PET/CT. aPROMISE had high diagnostic accuracy, with 100% sensitivity for miT, M1a, and M1b stages. Notably, for M1b stage, aPROMISE achieved 100% sensitivity and specificity regardless of the radioisotope used. However, aPROMISE misinterpreted lymph nodes in some cases and missed five visceral metastases (2 adrenal and 3 liver), resulting in lower sensitivity for miM1c stage (63%). In addition to detecting metastatic sites, aPROMISE provided detailed metrics, including the number of metastatic lesions, total metastatic volume, and SUVmean. Despite the preliminary nature of the study, aPROMISE-assisted diagnosis significantly reduced diagnostic time and achieved satisfactory accuracy compared to manual diagnosis. While aPROMISE is effective in detecting bone metastases, its limitations in identifying lymph node and visceral metastases must be carefully addressed. This study supports the utility of aPROMISE in Japanese patients with mPCa and underscores the need for further validation in larger cohorts.
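As a rough illustration of the paired-time analysis in this study design (the same reader scores the same scans under two workflows), the sketch below runs a Wilcoxon signed-rank test on simulated diagnostic times; the values are illustrative placeholders centered on the reported medians, not the study data.

```python
# Hypothetical sketch: paired comparison of diagnostic times,
# manual vs. aPROMISE-assisted, for the same 21 scans and reader.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
manual_s = rng.normal(1114, 180, size=21)    # illustrative times (seconds)
assisted_s = rng.normal(427, 120, size=21)   # illustrative times (seconds)

# Paired design (same reader, same scans) -> Wilcoxon signed-rank test
stat, p = stats.wilcoxon(manual_s, assisted_s)
print(f"median manual: {np.median(manual_s):.0f} s, "
      f"median assisted: {np.median(assisted_s):.0f} s, p = {p:.4f}")
```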

An interpretable machine learning model for predicting bone marrow invasion in patients with lymphoma via <sup>18</sup>F-FDG PET/CT: a multicenter study.

Zhu X, Lu D, Wu Y, Lu Y, He L, Deng Y, Mu X, Fu W

Jul 15 2025
Accurate identification of bone marrow invasion (BMI) is critical for determining the prognosis of and treatment strategies for lymphoma. Although bone marrow biopsy (BMB) is the current gold standard, its invasive nature and sampling errors highlight the need for noninvasive alternatives. We aimed to develop and validate an interpretable machine learning model that integrates clinical data, <sup>18</sup>F-fluorodeoxyglucose positron emission tomography/computed tomography (<sup>18</sup>F-FDG PET/CT) parameters, radiomic features, and deep learning features to predict BMI in lymphoma patients. We included 159 newly diagnosed lymphoma patients (118 from Center I and 41 from Center II), excluding those with prior treatment, incomplete data, or age under 18 years. Data from Center I were randomly allocated to training (n = 94) and internal test (n = 24) sets; Center II served as an external validation set (n = 41). Clinical parameters, PET/CT features, radiomic characteristics, and deep learning features were comprehensively analyzed and integrated into machine learning models. Model interpretability was elucidated via SHapley Additive exPlanations (SHAP). Additionally, a comparative diagnostic study evaluated reader performance with and without model assistance. BMI was confirmed in 70 (44%) patients. Key clinical predictors included B symptoms and platelet count. Among the tested models, the ExtraTrees classifier achieved the best performance. In external validation, the combined model (clinical + PET/CT + radiomics + deep learning) achieved an area under the receiver operating characteristic curve (AUC) of 0.886, outperforming models using only clinical (AUC 0.798), radiomic (AUC 0.708), or deep learning features (AUC 0.662). SHAP analysis revealed that PET radiomic features (especially PET_lbp_3D_m1_glcm_DependenceEntropy), platelet count, and B symptoms were significant predictors of BMI. Model assistance significantly enhanced junior reader performance (AUC improved from 0.663 to 0.818, p = 0.03) and improved senior reader accuracy, although not significantly (AUC 0.768 to 0.867, p = 0.10). Our interpretable machine learning model, which integrates clinical, imaging, radiomic, and deep learning features, demonstrated robust BMI prediction performance and notably enhanced physician diagnostic accuracy. These findings underscore the clinical potential of interpretable AI to complement medical expertise and potentially reduce reliance on invasive BMB for lymphoma staging.
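A minimal sketch of the modeling pattern described here: an ExtraTrees classifier over fused features, explained with SHAP. The feature matrix, labels, and split below are hypothetical stand-ins for the study's clinical + PET/CT + radiomic + deep learning features.

```python
# Hedged sketch: ExtraTrees over fused features + SHAP attribution.
import numpy as np
import shap
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
X = rng.normal(size=(159, 30))        # 159 patients, 30 fused features (placeholder)
y = rng.integers(0, 2, size=159)      # 1 = bone marrow invasion (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = ExtraTreesClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

# SHAP attributes each prediction to individual features
# (e.g., a PET GLCM entropy feature, platelet count, B symptoms).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```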

Evaluation of Artificial Intelligence-based diagnosis for facial fractures, advantages compared with conventional imaging diagnosis: a systematic review and meta-analysis.

Ju J, Qu Z, Qing H, Ding Y, Peng L

Jul 15 2025
Convolutional neural networks (CNNs) have emerged as a highly promising artificial intelligence (AI) tool for medical imaging diagnosis. In particular, AI-assisted diagnosis holds significant potential for orthopedic and emergency department physicians by improving diagnostic efficiency and enhancing the overall patient experience. This systematic review and meta-analysis aims to assess the application of AI in diagnosing facial fractures and to evaluate its diagnostic performance. The study adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and PRISMA-Diagnostic Test Accuracy (PRISMA-DTA) guidelines. A comprehensive literature search was conducted in the PubMed, Cochrane Library, and Web of Science databases to identify original articles published up to December 2024. The risk of bias and applicability of the included studies were assessed using the QUADAS-2 tool. Results were analyzed using a summary receiver operating characteristic (SROC) curve. A total of 16 studies were included in the analysis, with contingency tables extracted from 11 of them. The pooled sensitivity was 0.889 (95% CI: 0.844-0.922), and the pooled specificity was 0.888 (95% CI: 0.834-0.926). The area under the SROC curve was 0.911. In the subgroup analysis, the pooled sensitivity for nasal fractures was 0.851 (95% CI: 0.806-0.887) with a pooled specificity of 0.883 (95% CI: 0.862-0.902); for mandibular fractures, the pooled sensitivity was 0.905 (95% CI: 0.836-0.947) with a pooled specificity of 0.895 (95% CI: 0.824-0.940). AI can be developed as an auxiliary tool to assist clinicians in diagnosing facial fractures. The results demonstrate high overall sensitivity and specificity, along with robust performance reflected by the high area under the SROC curve. This study was prospectively registered in PROSPERO (ID: CRD42024618650; registration date: 10 Dec 2024). https://www.crd.york.ac.uk/PROSPERO/view/CRD42024618650
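For readers unfamiliar with pooling diagnostic accuracy, the sketch below pools per-study sensitivities on the logit scale with inverse-variance weights. This is a simplified univariate fixed-effect pool, not the bivariate SROC model used in the paper, and the counts are hypothetical.

```python
# Simplified sketch of pooling per-study sensitivity on the logit scale.
import numpy as np

# (true positives, false negatives) per study -- illustrative values only
tp = np.array([45, 60, 38, 70])
fn = np.array([5, 9, 6, 8])

sens = tp / (tp + fn)
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / fn                 # variance of the logit (delta method)
w = 1 / var                           # inverse-variance weights
pooled_logit = np.sum(w * logit) / np.sum(w)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.3f}")
```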

Ultrafast T2-weighted MR imaging of the urinary bladder using deep learning-accelerated HASTE at 3 Tesla.

Yan L, Tan Q, Kohnert D, Nickel MD, Weiland E, Kubicka F, Jahnke P, Geisel D, Wagner M, Walter-Rittel T

Jul 15 2025
This prospective study aimed to assess the feasibility of a half-Fourier single-shot turbo spin echo (HASTE) sequence with deep learning (DL) reconstruction for ultrafast imaging of the bladder with reduced susceptibility to motion artifacts. Fifty patients underwent pelvic T2-weighted imaging at 3 Tesla using the following MR sequences in sagittal orientation without antiperistaltic premedication: T2-TSE (time of acquisition [TA]: 2.03-4.00 min), standard HASTE (TA: 0.65-1.10 min), and DL-HASTE (TA: 0.25-0.47 min), with a slice thickness of 3 mm and a varying number of slices (25-45). Three radiologists evaluated the image quality of the three sequences quantitatively and qualitatively. Overall image quality of DL-HASTE (average score: 5) was superior to that of HASTE and T2-TSE (p < .001). DL-HASTE provided the clearest bladder wall delineation, especially in the apical part of the bladder (p < .001). SNR (36.3 ± 6.3) and CNR (50.3 ± 19.7) were highest on DL-HASTE, followed by T2-TSE (33.1 ± 6.3 and 44.3 ± 21.0, respectively; p < .05) and HASTE (21.7 ± 5.4 and 35.8 ± 17.5, respectively; p < .01). A limitation of DL-HASTE and HASTE was susceptibility to urine flow artifacts within the bladder, which were absent or only minimal on T2-TSE. Diagnostic confidence in assessment of the bladder was highest with the combination of DL-HASTE and T2-TSE (p < .05). DL-HASTE allows ultrafast imaging of the bladder with high image quality and is a promising addition to T2-TSE.
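The SNR and CNR figures above are typically computed from region-of-interest statistics; a minimal sketch follows, assuming the usual mean-signal over noise-standard-deviation definitions (the paper's exact ROI placement may differ).

```python
# Sketch of common SNR/CNR definitions from ROI voxel intensities.
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    # SNR = mean signal intensity / standard deviation of background noise
    return float(signal_roi.mean() / noise_roi.std())

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_roi: np.ndarray) -> float:
    # CNR = |mean(A) - mean(B)| / standard deviation of background noise
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_roi.std())

rng = np.random.default_rng(1)
bladder_wall = rng.normal(400, 20, 500)   # illustrative voxel intensities
urine = rng.normal(900, 25, 500)
background = rng.normal(0, 12, 500)
print(snr(urine, background), cnr(urine, bladder_wall, background))
```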

<sup>18</sup>F-FDG PET-based liver segmentation using deep learning.

Kaneko Y, Miwa K, Yamao T, Miyaji N, Nishii R, Yamazaki K, Nishikawa N, Yusa M, Higashi T

Jul 15 2025
Organ segmentation using <sup>18</sup>F-FDG PET images alone has not been extensively explored. Segmentation methods based on deep learning (DL) have traditionally relied on CT or MRI images, which are vulnerable to alignment issues and artifacts. This study aimed to develop a DL approach for segmenting the entire liver based solely on <sup>18</sup>F-FDG PET images. We analyzed data from 120 patients who were assessed using <sup>18</sup>F-FDG PET. A three-dimensional (3D) U-Net from the nnU-Net framework served as the DL model, with preprocessed PET images as input. The model was trained with 5-fold cross-validation on data from 100 patients, and segmentation accuracy was evaluated on an independent test set of 20 patients. Accuracy was assessed using Intersection over Union (IoU), Dice coefficient, and liver volume. Image quality was evaluated using mean (SUVmean) and maximum (SUVmax) standardized uptake values and signal-to-noise ratio (SNR). The model achieved an average IoU of 0.89 and an average Dice coefficient of 0.94 on the test data from 20 patients, indicating high segmentation accuracy. No significant discrepancies in image quality metrics were identified compared with the ground truth. Liver regions were accurately extracted from <sup>18</sup>F-FDG PET images, allowing rapid and stable evaluation of liver uptake in individual patients without the need for CT or MRI assessments.
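The IoU and Dice metrics reported here are straightforward to compute from binary masks; a minimal sketch on toy 3D volumes follows.

```python
# Overlap metrics between predicted and ground-truth binary liver masks.
import numpy as np

def iou_and_dice(pred: np.ndarray, gt: np.ndarray) -> tuple[float, float]:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    iou = inter / union
    dice = 2 * inter / (pred.sum() + gt.sum())
    return float(iou), float(dice)

# Toy 3D volumes for illustration
pred = np.zeros((64, 64, 64), dtype=bool); pred[10:40, 10:40, 10:40] = True
gt = np.zeros_like(pred);                  gt[12:42, 10:40, 10:40] = True
print(iou_and_dice(pred, gt))
```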

An efficient deep learning based approach for automated identification of cervical vertebrae fracture as a clinical support aid.

Singh M, Tripathi U, Patel KK, Mohit K, Pathak S

Jul 15 2025
Cervical vertebrae fractures pose a significant risk to patient health, and accurate diagnosis with prompt treatment is essential for effective care. Automated analysis of cervical vertebrae fractures is therefore of utmost importance, as deep learning models are widely used and play a significant role in identification and classification. In this paper, we propose a novel hybrid transfer learning approach for the identification and classification of fractures in axial CT scan slices of the cervical spine. We utilize the publicly available RSNA (Radiological Society of North America) dataset of annotated cervical vertebrae fractures for our experiments. The CT scan slices undergo preprocessing and analysis to extract features, employing four distinct pre-trained transfer learning models to detect abnormalities in the cervical vertebrae. The top-performing model, Inception-ResNet-v2, is combined with the upsampling component of U-Net to form a hybrid architecture. The hybrid model demonstrates superior performance over traditional deep learning models, achieving an overall accuracy of 98.44% on 2,984 test CT scan slices, a 3.62% relative improvement over the 95% accuracy of predictions made by radiologists. This study advances clinical decision support systems, equipping medical professionals with a powerful tool for timely intervention and accurate diagnosis of cervical vertebrae fractures, thereby enhancing patient outcomes and healthcare efficiency.
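A hedged sketch of the hybrid idea: an Inception-ResNet-v2 encoder feeding a U-Net-style upsampling head before a binary fracture classifier. The decoder depth, filter counts, and input size below are assumptions; the abstract does not specify the exact configuration.

```python
# Sketch of an Inception-ResNet-v2 backbone + U-Net-style upsampling head.
import tensorflow as tf
from tensorflow.keras import layers, models

backbone = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(256, 256, 3))
backbone.trainable = False  # transfer learning: freeze the pretrained encoder

x = backbone.output                      # low-resolution feature map
for filters in (512, 256, 128, 64):      # assumed U-Net-style upsampling path
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(1, activation="sigmoid")(x)  # fracture / no fracture

model = models.Model(backbone.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```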

Learning quality-guided multi-layer features for classifying visual types with ball sports application.

Huang X, Liu T, Yu Y

Jul 15 2025
Breast cancer is one of the leading causes of death among women, highlighting the need for precise X-ray image analysis in the medical imaging field. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets, mimicking human visual perception. We begin with a large dataset of breast cancer images and apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique for the weakly annotated setting that identifies the patches most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We leverage these features to train a multi-class SVM classifier, which categorizes the images into various breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
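A minimal sketch of the final stage described here: aggregating per-patch features into an image-level descriptor and training a multi-class SVM. The upstream BING proposals and patch ranking are assumed to be computed elsewhere; the features and labels below are random placeholders.

```python
# Sketch: patch-feature aggregation followed by a multi-class SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
n_images, n_patches, dim = 200, 50, 128
patch_feats = rng.normal(size=(n_images, n_patches, dim))  # placeholder patches

# Simple aggregation: average the (already ranked) patches per image
image_feats = patch_feats.mean(axis=1)
labels = rng.integers(0, 4, size=n_images)  # illustrative: 4 cancer stages

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(image_feats, labels)
print("train accuracy:", clf.score(image_feats, labels))
```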

Performance of a screening-trained DL model for pulmonary nodule malignancy estimation of incidental clinical nodules.

Dinnessen R, Peeters D, Antonissen N, Mohamed Hoesein FAA, Gietema HA, Scholten ET, Schaefer-Prokop C, Jacobs C

Jul 15 2025
To test the performance of a DL model developed and validated for screen-detected pulmonary nodules on incidental nodules detected in a clinical setting. A retrospective dataset of incidental pulmonary nodules sized 5-15 mm was collected, and a subset of size-matched solid nodules was selected. The performance of the DL model was compared to the Brock model. AUCs with 95% CIs were compared using the DeLong method. Sensitivity and specificity were determined at various thresholds, using a 10% threshold for the Brock model as reference. The model's calibration was visually assessed. The dataset included 49 malignant and 359 benign solid or part-solid nodules, and the size-matched dataset included 47 malignant and 47 benign solid nodules. In the complete dataset, AUCs [95% CI] were 0.89 [0.85, 0.93] for the DL model and 0.86 [0.81, 0.92] for the Brock model (p = 0.27). In the size-matched subset, AUCs of the DL and Brock models were 0.78 [0.69, 0.88] and 0.58 [0.46, 0.69] (p < 0.01), respectively. At a 10% threshold, the Brock model had a sensitivity of 0.49 [0.35, 0.63] and a specificity of 0.92 [0.89, 0.94]. At a threshold of 17%, the DL model matched the specificity of the Brock model at the 10% threshold but had a higher sensitivity (0.57 [0.43, 0.71]). Calibration analysis revealed that the DL model overestimated the malignancy probability. The DL model demonstrated good discriminatory performance in a dataset of incidental nodules and outperformed the Brock model, but may need recalibration for clinical practice. Question: What is the performance of a DL model for pulmonary nodule malignancy risk estimation, developed on screening data, in a dataset of incidentally detected nodules? Findings: The DL model performed well on a dataset of nodules from clinical routine care and outperformed the Brock model in a size-matched subset. Clinical relevance: This study provides further evidence of the potential of DL models for risk stratification of incidental nodules, which may improve nodule management in routine clinical practice.
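The threshold-matching step (finding the DL threshold whose specificity equals the Brock model's at its 10% operating point) can be sketched as below; scores are simulated placeholders, not the study data.

```python
# Sketch: sweep thresholds until the DL model matches a target specificity,
# then read off the sensitivity at that operating point.
import numpy as np

rng = np.random.default_rng(3)
y = np.concatenate([np.ones(49), np.zeros(359)])  # 49 malignant, 359 benign
dl_scores = np.clip(rng.normal(0.35, 0.2, y.size) + 0.25 * y, 0, 1)  # placeholder

def sens_spec(scores, y, thr):
    pred = scores >= thr
    return pred[y == 1].mean(), (~pred[y == 0]).mean()

target_spec = 0.92  # Brock model's specificity at the 10% threshold
for thr in np.linspace(0, 1, 201):  # pick the lowest threshold reaching it
    sens, spec = sens_spec(dl_scores, y, thr)
    if spec >= target_spec:
        print(f"threshold {thr:.2f}: sensitivity {sens:.2f}, specificity {spec:.2f}")
        break
```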

Advanced finite segmentation model with hybrid classifier learning for high-precision brain tumor delineation in PET imaging.

Murugan K, Palanisamy S, Sathishkumar N, Alshalali TAN

Jul 15 2025
Brain tumor segmentation plays a crucial role in clinical diagnostics and treatment planning, yet accurate and efficient segmentation remains a significant challenge due to complex tumor structures and variations in imaging modalities. Multi-feature selection and region classification depend on continuous, homogeneous features to improve the precision of tumor detection, and such classification is needed to suppress discreteness across varying extraction rates so that even the smallest infected region can be segmented. This study proposes a Finite Segmentation Model (FSM) with Improved Classifier Learning (ICL) to enhance segmentation accuracy in Positron Emission Tomography (PET) images. The FSM-ICL framework integrates advanced textural feature extraction, deep learning-based classification, and an adaptive segmentation approach to differentiate between tumor and non-tumor regions with high precision. Our model is trained and validated on the Synthetic Whole-Head Brain Tumor Segmentation Dataset, consisting of 1000 training and 426 testing images, achieving a segmentation accuracy of 92.57% and significantly outperforming existing approaches such as NRAN (62.16%), DSSE-V-Net (71.47%), and DenseUNet+ (83.93%). Furthermore, FSM-ICL enhances classification precision to 95.59%, reduces classification error to 5.67%, and shortens classification time to 572.39 ms, demonstrating a 10.09% improvement in precision and a 10.96% boost in classification rate over state-of-the-art methods. The hybrid classifier learning approach effectively addresses segmentation discreteness, ensuring both continuous and discrete tumor region detection with superior feature differentiation. This work has significant implications for automated tumor detection, personalized treatment strategies, and AI-driven medical imaging advancements. Future directions include incorporating micro-segmentation and pre-classification techniques to further optimize performance on densely pixel-packed datasets.

A diffusion model for universal medical image enhancement.

Fei B, Li Y, Yang W, Gao H, Xu J, Ma L, Yang Y, Zhou P

Jul 15 2025
The development of medical imaging techniques has made a significant contribution to clinical decision-making. However, suboptimal imaging quality, such as irregular illumination or imbalanced intensity, presents significant obstacles to automated disease screening, analysis, and diagnosis. Existing approaches for natural image enhancement are mostly trained on numerous paired images, which makes data collection and training costly while limiting generalization. Here, we introduce a training-free Diffusion Model for Universal Medical Image Enhancement, named UniMIE. UniMIE demonstrates unsupervised enhancement capabilities across various medical image modalities without any fine-tuning, relying solely on a single model pre-trained on ImageNet. We conduct a comprehensive evaluation on 13 imaging modalities and over 15 medical types, demonstrating better quality, robustness, and accuracy than modality-specific and data-inefficient models. By delivering high-quality enhancement and corresponding gains in downstream accuracy across a wide range of tasks, UniMIE exhibits considerable potential to accelerate the development of diagnostic tools and customized treatment plans. UniMIE offers a versatile and robust approach to medical image enhancement that adapts to diverse imaging conditions; by improving image quality and facilitating better downstream analyses, it has the potential to streamline clinical workflows and enhance diagnostic accuracy across a wide range of medical applications.
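A conceptual sketch of training-free diffusion-based enhancement: alternate a reverse-diffusion denoising step with a data-consistency pull toward the degraded input. The `denoiser` below is a hypothetical stand-in for a pretrained ImageNet diffusion model, and the sampler is deliberately simplified relative to whatever UniMIE actually uses.

```python
# Conceptual sketch only: guided reverse diffusion for image enhancement.
import numpy as np

def denoiser(x: np.ndarray, t: int) -> np.ndarray:
    # Placeholder "pretrained model": mild smoothing as a fake denoising step
    return 0.9 * x + 0.1 * np.roll(x, 1, axis=0)

def enhance(degraded: np.ndarray, steps: int = 50, guidance: float = 0.1) -> np.ndarray:
    rng = np.random.default_rng(0)
    x = rng.normal(size=degraded.shape)      # start from pure noise
    for t in reversed(range(steps)):
        x = denoiser(x, t)                   # reverse-diffusion step
        x = x + guidance * (degraded - x)    # data-consistency guidance
    return x

enhanced = enhance(np.random.rand(64, 64))
print(enhanced.shape)
```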