The dosimetric impacts of CT-based deep learning autocontouring algorithm for prostate cancer radiotherapy planning: dosimetric accuracy of DirectORGANS.

Dinç SÇ, Üçgül AN, Bora H, Şentürk E

PubMed · Aug 2, 2025
In this study, we aimed to dosimetrically evaluate the usability of a new-generation autocontouring algorithm (DirectORGANS) that automatically identifies and contours organs directly at the computed tomography (CT) simulator before prostate radiotherapy plans are created. The CT images of 10 patients were used. The prostate, bladder, rectum, and femoral heads of the 10 patients were contoured automatically with the DirectORGANS algorithm at the CT simulator. On the same CT image sets, the target volumes and organs at risk were manually contoured by an experienced physician with reference to MRI images and used as reference structures. Doses to the manually delineated contours and to the auto contours of the target volume and organs at risk were obtained from the dose-volume histogram of the same plan. The conformity index (CI) and homogeneity index (HI) were calculated to evaluate the target volumes. For the organs at risk, V60, V65, and V70 for the rectum, V65, V70, V75, and V80 for the bladder, and the maximum doses to the femoral heads were evaluated. The Mann-Whitney U test was used for statistical comparison in SPSS (P < 0.05). Comparing the doses of the manual contours (MC) with those of the auto contours (AC), there was no significant difference for the organs at risk. However, there were statistically significant differences in HI and CI values due to differences in prostate contouring (P < 0.05). The study showed the need for clinicians to edit target volumes using MRI before treatment planning; however, it demonstrated that the automatically delineated organs at risk could be used safely without correction. The DirectORGANS algorithm is suitable for use in RT planning to minimize differences between physicians and to shorten the duration of the contouring step.
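
The abstract does not state which formulations of CI and HI were used; for context, two commonly used definitions (RTOG conformity index and ICRU Report 83 homogeneity index) are shown below as an assumption, not as the study's own formulas.

```latex
% Assumed, commonly used formulations (the abstract does not specify which were applied); requires amsmath for \text.
\[
  \mathrm{CI} = \frac{V_{\mathrm{RI}}}{\mathrm{TV}}
  \quad\text{(RTOG: prescription-isodose volume over target volume)}
\]
\[
  \mathrm{HI} = \frac{D_{2\%} - D_{98\%}}{D_{50\%}}
  \quad\text{(ICRU 83: near-maximum minus near-minimum dose over median dose)}
\]
```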

Transfer learning-based deep architecture for lung cancer classification using CT images with a pattern- and entropy-based feature set.

R N, C M V

PubMed · Aug 2, 2025
Early detection of lung cancer, which remains one of the leading causes of death worldwide, is important for improved prognosis, and CT scanning is a key diagnostic modality. Classifying lung cancer from CT scans is challenging because the disease presents highly variable features. A hybrid deep architecture, ILN-TL-DM, is presented in this paper for precise classification of lung cancer from CT scan images. Initially, an adaptive Gaussian filter is applied during pre-processing to eliminate noise and enhance the quality of the CT image. An Improved Attention-based ResU-Net (P-ResU-Net) model is then used for segmentation, accurately isolating the lung and tumor regions from the rest of the image. During feature extraction, various features are derived from the segmented images, including the Local Gabor Transitional Pattern (LGTrP), Pyramid of Histograms of Oriented Gradients (PHOG), deep features, and improved entropy-based features, all intended to improve the representation of the tumor regions. Finally, classification uses a hybrid deep learning architecture integrating an improved LeNet structure with transfer learning (ILN-TL) and a DeepMaxout (DM) structure. The outputs of the two models are merged by a soft voting strategy, yielding the final classification that separates cancerous from non-cancerous tissue. This strategy substantially enhances the accuracy and robustness of lung cancer detection, showing how combining neural network architectures with feature engineering and ensemble methods can improve medical image classification. The ILN-TL-DM model consistently outperforms conventional methods with greater accuracy (0.962), specificity (0.955), and NPV (0.964).
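
The abstract describes merging the ILN-TL and DeepMaxout outputs by soft voting; the sketch below only illustrates that fusion step, assuming each branch outputs per-class probabilities. The names `p_iln_tl`, `p_dm` and the equal weights are illustrative, not taken from the paper.

```python
import numpy as np

def soft_vote(p_iln_tl: np.ndarray, p_dm: np.ndarray, weights=(0.5, 0.5)) -> np.ndarray:
    """Fuse two classifiers' class-probability outputs by weighted soft voting.

    p_iln_tl, p_dm: arrays of shape (n_samples, n_classes) with per-class probabilities.
    Returns the index of the winning class for each sample.
    """
    fused = weights[0] * p_iln_tl + weights[1] * p_dm   # average the probability estimates
    return fused.argmax(axis=1)                          # pick the most probable class

# Illustrative usage with two samples and two classes (non-cancerous = 0, cancerous = 1)
p_branch_a = np.array([[0.30, 0.70], [0.80, 0.20]])
p_branch_b = np.array([[0.40, 0.60], [0.55, 0.45]])
print(soft_vote(p_branch_a, p_branch_b))  # -> [1 0]
```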

AI-enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed · Aug 2, 2025
Hepatocellular carcinoma (HCC) ultrasound screening faces challenges related to accuracy and radiologist workload. This retrospective, multicenter study assessed four artificial intelligence (AI)-enhanced strategies using 21,934 liver ultrasound images from 11,960 patients to improve HCC ultrasound screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, both trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection with radiologist evaluation of negative cases in both the detection and classification phases, outperformed the others. It not only matched the high sensitivity of the original algorithm (0.956 vs. 0.991) but also improved specificity (0.787 vs. 0.698), reduced radiologist workload by 54.5%, and decreased both recall and false-positive rates. This approach demonstrates a successful model of human-AI collaboration, enhancing clinical outcomes while mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
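
The abstract does not spell out the exact routing rules of Strategy 4; the sketch below only illustrates the general idea of AI-first reading with radiologist review of AI-negative cases at the detection and classification stages. The function names, arguments, and return values are hypothetical, not the study's protocol.

```python
def screen_case(image, detector, classifier, radiologist_review):
    """Illustrative AI-first triage: the AI reads every case, and a radiologist
    re-reads the cases the AI calls negative at either stage.
    This is a sketch of the concept, not the published workflow."""
    lesion = detector(image)                     # AI lesion detection step (UniMatch-style model)
    if lesion is None:
        # AI-negative at detection: radiologist double-checks before discharge
        return radiologist_review(image, stage="detection")
    label = classifier(lesion)                   # AI benign/suspicious classification (LivNet-style model)
    if label == "negative":
        # AI-negative at classification: radiologist double-checks
        return radiologist_review(image, stage="classification")
    return "recall"                              # AI-positive cases proceed to recall/work-up
```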

Integrating Time and Frequency Domain Features of fMRI Time Series for Alzheimer's Disease Classification Using Graph Neural Networks.

Peng W, Li C, Ma Y, Dai W, Fu D, Liu L, Liu L, Yu N, Liu J

PubMed · Aug 2, 2025
Accurate and early diagnosis of Alzheimer's Disease (AD) is crucial for timely intervention and treatment advancement. Functional Magnetic Resonance Imaging (fMRI), which measures changes in brain blood-oxygen levels over time, is a powerful AD-diagnosis tool. However, current fMRI-based AD diagnosis methods rely on noise-susceptible time-domain features and focus only on synchronous brain-region interactions within the same time phase, neglecting asynchronous ones. To overcome these issues, we propose the Frequency-Time Fusion Graph Neural Network (FTF-GNN). It integrates frequency- and time-domain features for robust AD classification, considering both asynchronous and synchronous brain-region interactions. First, we construct a fully connected hypervariate graph, where nodes represent brain regions and their Blood Oxygen Level-Dependent (BOLD) values at each time point. A Discrete Fourier Transform (DFT) transforms these BOLD values from the spatial to the frequency domain for frequency-component analysis. Second, a Fourier-based Graph Neural Network (FourierGNN) processes the frequency features to capture asynchronous brain-region connectivity patterns. Third, these features are converted back to the time domain and reshaped into a matrix whose rows represent brain regions and whose columns represent their frequency-domain features at each time point. Each brain region then fuses its frequency-domain features with a position encoding along the time series, preserving temporal and spatial information. Next, we build a brain-region network based on synchronous BOLD-value associations and feed both the network and the fused features into a Graph Convolutional Network (GCN) to capture synchronous brain-region connectivity patterns. Finally, a fully connected network classifies the brain-region features. Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the method's effectiveness: our model achieves 91.26% accuracy and 96.79% AUC in AD versus Normal Control (NC) classification. For early-stage detection, it attains state-of-the-art performance in distinguishing NC from Late Mild Cognitive Impairment (LMCI), with 87.16% accuracy and 93.22% AUC. Notably, in the challenging task of differentiating LMCI from AD, FTF-GNN achieves optimal performance (85.30% accuracy, 94.56% AUC), while also delivering competitive results (77.40% accuracy, 91.17% AUC) in distinguishing Early MCI (EMCI) from LMCI, the most clinically complex subtype classification. These results indicate that leveraging complementary frequency- and time-domain information, and considering both asynchronous and synchronous brain-region interactions, can address the limitations of existing approaches and offer a robust neuroimaging-based diagnostic solution.
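
A minimal sketch of the frequency-domain round trip described above: a regions-by-time BOLD matrix is moved into the frequency domain with a DFT, processed there, and converted back. The array shapes are illustrative and the identity "filter" is a placeholder; the paper's FourierGNN and GCN stages are not reproduced here.

```python
import numpy as np

# Illustrative BOLD data: 90 brain regions x 140 time points (random stand-in for real fMRI)
rng = np.random.default_rng(0)
bold = rng.standard_normal((90, 140))

# Time domain -> frequency domain along the time axis (real-valued FFT)
spectrum = np.fft.rfft(bold, axis=1)             # shape (90, 71), complex frequency components

# A frequency-domain operator would act here (FourierGNN in the paper); identity used as a placeholder
processed = spectrum

# Frequency domain -> back to the time domain, recovering a regions x time matrix
bold_reconstructed = np.fft.irfft(processed, n=bold.shape[1], axis=1)

assert np.allclose(bold, bold_reconstructed)     # the identity round trip preserves the signal
```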

Deep learning-driven incidental detection of vertebral fractures in cancer patients: advancing diagnostic precision and clinical management.

Mniai EM, Laletin V, Tselikas L, Assi T, Bonnet B, Camez AO, Zemmouri A, Muller S, Moussa T, Chaibi Y, Kiewsky J, Quenet S, Avare C, Lassau N, Balleyguier C, Ayobi A, Ammari S

PubMed · Aug 2, 2025
Vertebral compression fractures (VCFs) are the most prevalent skeletal manifestation of osteoporosis in cancer patients. Yet they are frequently missed or not reported in routine clinical radiology, adversely impacting patient outcomes and quality of life. This study evaluates the diagnostic performance of a deep learning (DL)-based application and its potential to reduce the miss rate of incidental VCFs in a high-risk cancer population. We retrospectively analysed thoraco-abdomino-pelvic (TAP) CT scans from 1556 patients with stage IV cancer collected consecutively over a 4-month period (September-December 2023) in a tertiary cancer centre. A DL-based application flagged cases positive for VCFs, which were subsequently reviewed by two expert radiologists for validation. Additionally, grade 3 fractures identified by the application were independently assessed by two expert interventional radiologists to determine their eligibility for vertebroplasty. Of the 1556 cases, 501 were flagged as positive for VCF by the application, with 436 confirmed as true positives on expert review, yielding a positive predictive value (PPV) of 87%. Common causes of false positives included sclerotic vertebral metastases, scoliosis, and vertebra misidentification. Notably, 83.5% (364/436) of true-positive VCFs were absent from radiology reports, indicating a substantial non-report rate in routine practice. Ten grade 3 fractures were overlooked or not reported by radiologists; nine of these were deemed suitable for vertebroplasty by the expert interventional radiologists. This study underscores the potential of DL-based applications to improve the detection of VCFs. The analysed tool can assist radiologists in detecting more incidental vertebral fractures in adult cancer patients, supporting timely treatment and reducing the associated morbidity and economic burden. Moreover, it might enhance patient access to interventional treatments such as vertebroplasty. These findings highlight the transformative role that DL can play in optimising clinical management and outcomes for osteoporosis-related VCFs in cancer patients.
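
The headline figures follow directly from the counts reported in the abstract:

```latex
% Derived from the reported counts (436 true positives out of 501 flagged; 364 unreported of 436 confirmed); requires amsmath for \text.
\[
  \mathrm{PPV} = \frac{TP}{TP + FP} = \frac{436}{501} \approx 0.87,
  \qquad
  \text{non-report rate} = \frac{364}{436} \approx 83.5\%
\]
```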

Advances in renal cancer: diagnosis, treatment, and emerging technologies.

Saida T, Iima M, Ito R, Ueda D, Nishioka K, Kurokawa R, Kawamura M, Hirata K, Honda M, Takumi K, Ide S, Sugawara S, Watabe T, Sakata A, Yanagawa M, Sofue K, Oda S, Naganawa S

PubMed · Aug 2, 2025
This review provides a comprehensive overview of current practices and recent advancements in the diagnosis and treatment of renal cancer. It introduces updates in histological classification and explains the imaging characteristics of each tumour based on these changes. The review highlights state-of-the-art imaging modalities, including magnetic resonance imaging, computed tomography, positron emission tomography, and ultrasound, emphasising their crucial role in tumour characterisation and optimising treatment planning. Emerging technologies, such as radiomics and artificial intelligence, are also discussed for their transformative impact on enhancing diagnostic precision, prognostic prediction, and personalised patient management. Furthermore, the review explores current treatment options, including minimally invasive techniques such as cryoablation, radiofrequency ablation, and stereotactic body radiation therapy, as well as systemic therapies such as immune checkpoint inhibitors and targeted therapies.

Temporal consistency-aware network for renal artery segmentation in X-ray angiography.

Yang B, Li C, Fezzi S, Fan Z, Wei R, Chen Y, Tavella D, Ribichini FL, Zhang S, Sharif F, Tu S

PubMed · Aug 2, 2025
Accurate segmentation of renal arteries from X-ray angiography videos is crucial for evaluating renal sympathetic denervation (RDN) procedures, but it remains challenging due to dynamic changes in contrast concentration and vessel morphology across frames. The purpose of this study is to propose TCA-Net, a deep learning model that improves segmentation consistency by leveraging local and global contextual information in angiography videos. Our framework incorporates two key modules: a local temporal-window vessel enhancement module and a global vessel refinement (GVR) module. The local module fuses multi-scale temporal-spatial features to improve the semantic representation of vessels in the current frame, while the GVR module integrates decoupled attention strategies (video-level and object-level attention) and gating mechanisms to refine global vessel information and eliminate redundancy. To further improve segmentation consistency, a temporal perception consistency loss function is introduced during training. We developed our model using 195 renal artery angiography sequences and tested it on an external dataset from 44 patients. The results demonstrate that TCA-Net achieves an F1-score of 0.8678 for segmenting renal arteries, outperforming existing state-of-the-art segmentation methods. We present TCA-Net, a deep learning-based model that significantly improves segmentation consistency for renal artery angiography videos; by effectively leveraging both local and global temporal contextual information, it outperforms current methods and provides a reliable tool for assessing RDN procedures.
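
The abstract names a temporal perception consistency loss but does not give its formula. A common way to encourage frame-to-frame consistency is to penalise differences between predicted masks of adjacent frames, sketched below in PyTorch; this is an illustrative formulation under that assumption, not TCA-Net's published loss.

```python
import torch

def temporal_consistency_loss(pred_masks: torch.Tensor) -> torch.Tensor:
    """Penalise changes between consecutive predicted vessel masks.

    pred_masks: (T, H, W) tensor of per-frame segmentation probabilities in [0, 1].
    Returns the mean absolute difference between adjacent frames.
    """
    diffs = pred_masks[1:] - pred_masks[:-1]     # frame-to-frame prediction changes
    return diffs.abs().mean()

# Illustrative usage: 8 frames of 256x256 predictions; in training this term would be
# added to the per-frame segmentation loss with a weighting factor.
preds = torch.rand(8, 256, 256)
loss = temporal_consistency_loss(preds)
```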

Evaluating the Efficacy of Various Deep Learning Architectures for Automated Preprocessing and Identification of Impacted Maxillary Canines in Panoramic Radiographs.

Alenezi O, Bhattacharjee T, Alseed HA, Tosun YI, Chaudhry J, Prasad S

PubMed · Aug 2, 2025
Previously, automated cropping and reasonable classification accuracy for distinguishing impacted from non-impacted canines were demonstrated. This study evaluates multiple convolutional neural network (CNN) architectures for improving accuracy, as a step towards fully automated software for identifying impacted maxillary canines (IMCs) in panoramic radiographs (PRs). Eight CNNs (SqueezeNet, GoogLeNet, NASNet-Mobile, ShuffleNet, VGG-16, ResNet 50, DenseNet 201, and Inception V3) were compared in terms of their ability to classify two groups of PRs (impacted maxillary canines: n = 91; non-impacted: n = 91) before pre-processing and after applying automated cropping. GoogLeNet achieved the highest classification performance among the tested CNN architectures: areas under the receiver operating characteristic (ROC) curve (AUC) without and with preprocessing were 0.90 and 0.99, respectively, compared with 0.84 and 0.96 for SqueezeNet. Among the tested architectures, GoogLeNet thus achieved the highest performance on this dataset for automated identification of impacted maxillary canines on both uncropped and cropped PRs.
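
The abstract does not state which framework was used; as an assumption, the sketch below shows the kind of transfer-learning setup the comparison implies, fine-tuning an ImageNet-pretrained GoogLeNet for the two-class (impacted vs. non-impacted) task in PyTorch with a recent torchvision. The dummy batch stands in for preprocessed radiograph crops; data loading and evaluation are omitted.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and replace the classifier head for 2 classes
model = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)    # impacted vs. non-impacted maxillary canine

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (stand-in for cropped panoramic radiographs)
images = torch.rand(4, 3, 224, 224)
labels = torch.tensor([0, 1, 1, 0])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```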

Artificial Intelligence in Abdominal, Gynecological, Obstetric, Musculoskeletal, Vascular and Interventional Ultrasound.

Graumann O, Cui Xin W, Goudie A, Blaivas M, Braden B, Campbell Westerway S, Chammas MC, Dong Y, Gilja OH, Hsieh PC, Jiang Tian A, Liang P, Möller K, Nolsøe CP, Săftoiu A, Dietrich CF

PubMed · Aug 2, 2025
Artificial intelligence (AI) refers to the theoretical framework and systematic development of computational models designed to execute tasks that traditionally require human cognition. In medical imaging, AI is applied across modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound, and across pathologies in multiple organ systems. However, integrating AI into medical ultrasound presents unique challenges compared with modalities like CT and MRI because of its operator-dependent nature and the inherent variability of the image-acquisition process. Applying AI to ultrasound holds the potential to mitigate multiple sources of variability, recalibrate interpretative consistency, and uncover diagnostic patterns that may be difficult for humans to detect. Progress has led to significant innovation in medical ultrasound-based AI applications, facilitating their adoption in various clinical settings and for multiple diseases. This manuscript primarily aims to provide a concise yet comprehensive exploration of current and emerging AI applications in abdominal, musculoskeletal, obstetric and gynecological, and interventional medical ultrasound. The secondary aim is to discuss present limitations and potential challenges that such technological implementations may encounter.

[Tips and tricks for the cytological management of cysts].

Lacoste-Collin L, Fabre M

PubMed · Aug 2, 2025
Fine-needle aspiration is a well-known procedure for the diagnosis and management of solid lesions. Its application to cystic lesions is becoming a popular diagnostic approach owing to the increased availability of high-quality cross-sectional imaging such as computed tomography and of ultrasound-guided procedures such as endoscopic ultrasound. Cystic lesions are closed cavities containing liquid, sometimes partially solid, with various internal neoplastic and non-neoplastic components. The most frequently punctured cysts are located in the neck (thyroid and salivary glands), mediastinum, breast, and abdomen (pancreas and liver). The diagnostic accuracy of cytological cyst sampling is highly dependent on how the material is handled in the laboratory. This review explains how to approach the main features of superficial and deep organ cysts using basic cytological techniques (direct smears, cytocentrifugation), liquid-based cytology, and cell block. We show the role of a multimodal approach that can support wider implementation of ancillary tests (biochemical, immunocytochemical, and molecular) to improve diagnostic accuracy and the clinical management of patients with cystic lesions. In the near future, artificial intelligence models will offer detection, classification, and prediction capabilities for various cystic lesions. Two examples, in pancreatic and thyroid cytopathology, are discussed in detail.