Page 31 of 2982975 results

The Use of Artificial Intelligence to Improve Detection of Acute Incidental Pulmonary Emboli.

Kuzo RS, Levin DL, Bratt AK, Walkoff LA, Suman G, Houghton DE

PubMed · Aug 4, 2025
Incidental pulmonary emboli (IPE) are frequently overlooked by radiologists. Artificial intelligence (AI) algorithms have been developed to aid detection of pulmonary emboli. The aim of this study was to measure the diagnostic performance of AI compared with prospective interpretation by radiologists. A commercially available AI algorithm was used to retrospectively review 14,453 contrast-enhanced outpatient CT chest-abdomen-pelvis (CAP) exams in 9171 patients in whom PE was not clinically suspected. Natural language processing (NLP) searches of reports identified IPE detected prospectively. Thoracic radiologists reviewed all cases read as positive by AI or NLP to confirm IPE and to assess the most proximal level of clot and overall clot burden. 1,400 cases read as negative by both the initial radiologist and AI were re-reviewed to assess for additional IPE. Radiologists prospectively detected 218 IPE, and AI detected an additional 36 unreported cases. AI missed 30 cases of IPE detected by the radiologist and had 94 false positives. Of the 36 IPE missed by the radiologist, the median clot burden was 1, and 19 were solitary segmental or subsegmental emboli. Of the 30 IPE missed by AI, one case had large central emboli; the others were small, including 23 solitary subsegmental emboli. Radiologist re-review of the 1,400 exams interpreted as negative found 8 additional cases of IPE. Compared with radiologists, AI had similar sensitivity but reduced positive predictive value. Our experience indicates that the AI tool is not ready to be used autonomously, but a human observer plus AI is better than either alone for detection of incidental pulmonary emboli.
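The headline "reduced positive predictive value" can be reconstructed from the counts in the abstract; a quick sketch (the 224 true-positive total is our arithmetic, not a figure the abstract states):

```python
# Counts reported in the abstract (outpatient CT CAP cohort).
radiologist_tp = 218   # IPE detected prospectively by radiologists
ai_extra_tp = 36       # additional true IPE found only by AI
ai_missed = 30         # radiologist-detected IPE that AI missed
ai_fp = 94             # AI false positives

# Derived (assumption): AI true positives = the radiologist-detected
# cases it also found, plus the cases only it found.
ai_tp = (radiologist_tp - ai_missed) + ai_extra_tp   # 224
ai_ppv = ai_tp / (ai_tp + ai_fp)

print(f"AI PPV = {ai_ppv:.2f}")   # about 0.70, consistent with "reduced PPV"
```

The radiologists' prospective reads, by contrast, carry essentially no false positives in this design, which is why the abstract frames the gap as a PPV difference rather than a sensitivity difference.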

Artificial intelligence: a new era in prostate cancer diagnosis and treatment.

Vidiyala N, Parupathi P, Sunkishala P, Sree C, Gujja A, Kanagala P, Meduri SK, Nyavanandi D

PubMed · Aug 4, 2025
Prostate cancer (PCa) represents one of the most prevalent cancers among men, with substantial challenges in timely and accurate diagnosis and subsequent treatment. Traditional diagnosis and treatment methods for PCa, such as prostate-specific antigen (PSA) biomarker detection, digital rectal examination, imaging (CT/MRI) analysis, and biopsy histopathological examination, suffer from limitations such as a lack of specificity, generation of false positives or negatives, and difficulty handling large datasets, leading to overdiagnosis and overtreatment. The integration of artificial intelligence (AI) in PCa diagnosis and treatment is revolutionizing traditional approaches by offering advanced tools for early detection, personalized treatment planning, and patient management. AI technologies, especially machine learning and deep learning, improve diagnostic accuracy and treatment planning. AI algorithms analyze imaging data, such as MRI and ultrasound, to identify cancerous lesions with high precision. In addition, AI algorithms enhance risk assessment and prognosis by combining clinical, genomic, and imaging data. This leads to more tailored treatment strategies, enabling informed decisions about active surveillance, surgery, or new therapies, thereby improving quality of life while reducing unnecessary diagnoses and treatments. This review examines current AI applications in PCa care, focusing on their transformative impact on diagnosis and treatment planning while recognizing potential challenges. It also outlines expected improvements in diagnosis through AI-integrated systems and decision support tools for healthcare teams. The findings highlight AI's potential to enhance clinical outcomes, operational efficiency, and patient-centred care in managing PCa.

ESR Essentials: common performance metrics in AI-practice recommendations by the European Society of Medical Imaging Informatics.

Klontzas ME, Groot Lipman KBW, Akinci D'Antonoli T, Andreychenko A, Cuocolo R, Dietzel M, Gitto S, Huisman H, Santinha J, Vernuccio F, Visser JJ, Huisman M

PubMed · Aug 3, 2025
This article provides radiologists with practical recommendations for evaluating AI performance in radiology, ensuring alignment with clinical goals and patient safety. It outlines key performance metrics, including overlap metrics for segmentation, test-based metrics (e.g., sensitivity, specificity, and area under the receiver operating characteristic curve), and outcome-based metrics (e.g., precision, negative predictive value, F1-score, Matthews correlation coefficient, and area under the precision-recall curve). Key recommendations emphasize local validation using independent datasets, selecting task-specific metrics, and considering deployment context to ensure real-world performance matches claimed efficacy. Common pitfalls, such as overreliance on a single metric, misinterpretation in low-prevalence settings, and failure to account for clinical workflow, are addressed with mitigation strategies. Additional guidance is provided on threshold selection, prevalence-adjusted evaluation, and AI-generated image quality assessment. This guide equips radiologists to critically evaluate both commercially available and in-house developed AI tools, ensuring their safe and effective integration into clinical practice. CLINICAL RELEVANCE STATEMENT: This review provides guidance on selecting and interpreting AI performance metrics in radiology, ensuring clinically meaningful evaluation and safe deployment of AI tools. By addressing common pitfalls and promoting standardized reporting, it supports radiologists in making informed decisions, ultimately improving diagnostic accuracy and patient outcomes. KEY POINTS: Radiologists must evaluate performance metrics as they reflect acceptable performance in specific datasets but do not guarantee clinical utility. Independent evaluation tailored to the clinical setting is essential. 
Performance metrics must align with the intended task of the AI application-segmentation, detection, or classification-and be selected based on domain knowledge and clinical context. Sensitivity, specificity, area under the ROC curve, and accuracy must be interpreted with prevalence-dependent metrics (e.g., precision, F1 score, and Matthews correlation coefficient) calculated for the target population to ensure safe and effective clinical use.
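The prevalence pitfall these recommendations flag is easy to see by computing the metrics directly from a confusion matrix; a minimal sketch with made-up counts (identical sensitivity and specificity, very different prevalence):

```python
import math

def metrics(tp, fp, fn, tn):
    """Confusion-matrix metrics discussed in the article."""
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    prec = tp / (tp + fp)            # precision (positive predictive value)
    npv  = tn / (tn + fn)            # negative predictive value
    f1   = 2 * prec * sens / (prec + sens)
    mcc  = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"sens": sens, "spec": spec, "prec": prec,
            "npv": npv, "f1": f1, "mcc": mcc}

# Same sensitivity (0.90) and specificity (0.90), different prevalence:
# precision collapses in the low-prevalence setting, the exact pitfall
# the article warns about for screening populations.
high_prev = metrics(tp=90, fp=10, fn=10, tn=90)    # prevalence 50%
low_prev  = metrics(tp=9,  fp=99, fn=1,  tn=891)   # prevalence 1%
print(f"precision at 50% prevalence: {high_prev['prec']:.2f}")
print(f"precision at  1% prevalence: {low_prev['prec']:.2f}")
```

This is why the guide insists on prevalence-adjusted evaluation in the deployment population rather than relying on sensitivity/specificity alone.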

External evaluation of an open-source deep learning model for prostate cancer detection on bi-parametric MRI.

Johnson PM, Tong A, Ginocchio L, Del Hoyo JL, Smereka P, Harmon SA, Turkbey B, Chandarana H

PubMed · Aug 3, 2025
This study aims to evaluate the diagnostic accuracy of an open-source deep learning (DL) model for detecting clinically significant prostate cancer (csPCa) on biparametric MRI (bpMRI). It also aims to outline the necessary components of the model that facilitate effective sharing and external evaluation of PCa detection models. This retrospective diagnostic accuracy study evaluated a publicly available DL model trained to detect PCa on bpMRI. External validation was performed on bpMRI exams from 151 biologically male patients (mean age, 65 ± 8 years). The model's performance was evaluated using patient-level classification of PCa, with both radiologist interpretation and histopathology serving as the ground truth. The model processed bpMRI inputs to generate lesion probability maps. Performance was assessed using the area under the receiver operating characteristic curve (AUC) for PI-RADS ≥ 3, PI-RADS ≥ 4, and csPCa (defined as Gleason ≥ 7) at an exam level. The model achieved AUCs of 0.86 (95% CI: 0.80-0.92) and 0.91 (95% CI: 0.85-0.96) for predicting PI-RADS ≥ 3 and ≥ 4 exams, respectively, and 0.78 (95% CI: 0.71-0.86) for csPCa. Sensitivity and specificity for csPCa were 0.87 and 0.53, respectively. Fleiss' kappa for inter-reader agreement was 0.51. The open-source DL model offers high sensitivity for clinically significant prostate cancer. The study underscores the importance of sharing model code and weights to enable effective external validation and further research. Question Inter-reader variability hinders the consistent and accurate detection of clinically significant prostate cancer in MRI. Findings An open-source deep learning model demonstrated reproducible diagnostic accuracy, achieving AUCs of 0.86 for PI-RADS ≥ 3 and 0.78 for csPCa lesions. Clinical relevance The model's high sensitivity for MRI-positive lesions (PI-RADS ≥ 3) may provide support for radiologists.
Its open-source deployment facilitates further development and evaluation across diverse clinical settings, maximizing its potential utility.
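The exam-level AUC reported above is a rank statistic: the probability that a randomly chosen positive exam receives a higher model score than a randomly chosen negative one. A minimal pure-Python sketch of that computation (toy scores, not the study's data):

```python
def auc(scores_pos, scores_neg):
    """AUC via the Mann-Whitney U formulation: the fraction of
    positive/negative pairs where the positive outscores the negative
    (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical per-exam lesion-probability scores, for illustration only:
pos = [0.9, 0.8, 0.6, 0.55]   # exams with confirmed csPCa
neg = [0.7, 0.4, 0.3, 0.2]    # exams without
print(auc(pos, neg))          # 14 of 16 pairs ranked correctly -> 0.875
```

Real evaluations use an O(n log n) ranking implementation, but the pairwise definition above is the quantity being estimated.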

Adapting foundation models for rapid clinical response: intracerebral hemorrhage segmentation in emergency settings.

Gerbasi A, Mazzacane F, Ferrari F, Del Bello B, Cavallini A, Bellazzi R, Quaglini S

PubMed · Aug 3, 2025
Intracerebral hemorrhage (ICH) is a medical emergency that demands rapid and accurate diagnosis for optimal patient management. Segmentation of hemorrhagic lesions on CT scans is a necessary first step for acquiring quantitative imaging data that are becoming increasingly useful in the clinical setting. However, traditional manual segmentation is time-consuming and prone to inter-rater variability, creating a need for automated solutions. This study introduces a novel approach combining advanced deep learning models to segment extensive and morphologically variable ICH lesions in non-contrast CT scans. We propose a two-step methodology that begins with a user-defined loose bounding box around the lesion, followed by a fine-tuned YOLOv8-S object detection model that generates precise, slice-specific bounding boxes. These bounding boxes are then used to prompt the Medical Segment Anything Model for accurate lesion segmentation. We evaluated the pipeline on a dataset of 252 CT scans, where it achieved high segmentation accuracy and robustness with minimal supervision, demonstrating strong potential as a practical alternative to task-specific models. Finally, the resulting segmentation tool is integrated into a user-friendly web application prototype, offering clinicians a simple interface for lesion identification and radiomic quantification.
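The handoff between the two steps (user's loose 3-D box, then per-slice detector boxes used as segmentation prompts) can be sketched roughly as follows. This is an illustrative sketch with hypothetical box formats and function names, not the authors' code:

```python
def boxes_for_prompting(loose_box, detections):
    """Keep only per-slice detection boxes that fall inside the
    user-drawn loose box; these become the per-slice prompts handed to
    the segmentation model.

    Boxes are (x1, y1, x2, y2) in pixels; `detections` maps slice
    index -> list of detector boxes on that slice.
    """
    lx1, ly1, lx2, ly2 = loose_box
    prompts = {}
    for z, boxes in detections.items():
        kept = [b for b in boxes
                if b[0] >= lx1 and b[1] >= ly1
                and b[2] <= lx2 and b[3] <= ly2]
        if kept:
            prompts[z] = kept
    return prompts

# Hypothetical detections on three slices; only boxes inside the loose
# box survive to the prompting stage.
loose = (10, 10, 100, 100)
dets = {5: [(20, 20, 60, 60), (0, 0, 15, 15)], 6: [(30, 25, 80, 90)], 7: []}
print(boxes_for_prompting(loose, dets))
```

The design point is that the loose box costs the clinician a second or two, while the detector supplies the tight, slice-accurate prompts the segmentation model actually needs.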

The dosimetric impacts of a CT-based deep learning autocontouring algorithm for prostate cancer radiotherapy planning: dosimetric accuracy of DirectORGANS.

Dinç SÇ, Üçgül AN, Bora H, Şentürk E

PubMed · Aug 2, 2025
In this study, we aimed to dosimetrically evaluate the usability of a new-generation autocontouring algorithm (DirectORGANS) that automatically identifies and contours organs directly on the computed tomography (CT) simulator before prostate radiotherapy plans are created. CT images of 10 patients were used. The prostate, bladder, rectum, and femoral heads of the 10 patients were automatically contoured by the DirectORGANS algorithm at the CT simulator. On the same CT image sets, the same target volumes and organs at risk were manually contoured by an experienced physician using MRI images and served as the reference structures. Doses for the manually delineated contours and for the auto contours of the target volume and organs at risk were obtained from the dose-volume histogram of the same plan. The conformity index (CI) and homogeneity index (HI) were calculated to evaluate the target volumes. For critical organ structures, V<sub>60</sub>, V<sub>65</sub>, and V<sub>70</sub> for the rectum; V<sub>65</sub>, V<sub>70</sub>, V<sub>75</sub>, and V<sub>80</sub> for the bladder; and maximum doses for the femoral heads were evaluated. The Mann-Whitney U test was used for statistical comparison with the statistical package SPSS (P < 0.05). Comparing the doses of the manual contours (MC) with the auto contours (AC), there was no significant difference for the organs at risk. However, there were statistically significant differences in HI and CI values due to differences in prostate contouring (P < 0.05). The study showed the need for clinicians to edit target volumes using MRI before treatment planning. However, it demonstrated that the automatically delineated organs at risk could be used safely without correction. The DirectORGANS algorithm is suitable for use in RT planning to minimize differences between physicians and to shorten this contouring step.
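CI and HI are each defined several ways in the literature, and the abstract does not say which variants were used; a sketch of one common pair of definitions, with hypothetical DVH values:

```python
def homogeneity_index(d2, d98, d50):
    """HI = (D2% - D98%) / D50%; lower values mean a more homogeneous
    target dose. (One common definition, e.g. ICRU-style; the paper
    does not spell out which it used.)"""
    return (d2 - d98) / d50

def conformity_index(v_ref_isodose, v_target):
    """RTOG-style CI = volume enclosed by the reference isodose /
    target volume; values near 1 indicate a conformal plan."""
    return v_ref_isodose / v_target

# Hypothetical DVH values (doses in Gy, volumes in cc), illustration only:
print(homogeneity_index(d2=80.2, d98=74.1, d50=78.0))        # about 0.078
print(conformity_index(v_ref_isodose=102.0, v_target=95.0))  # about 1.07
```

Because both indices depend directly on the target contour, a systematic difference in prostate delineation shifts them even when the plan itself is unchanged, which is consistent with the significant HI/CI differences reported here.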

Transfer learning based deep architecture for lung cancer classification using CT image with pattern and entropy based feature set.

R N, C M V

PubMed · Aug 2, 2025
Early detection of lung cancer, which remains one of the leading causes of death worldwide, is important for improved prognosis, and CT scanning is a key diagnostic modality. Lung cancer classification from CT scans is challenging because the disease is characterized by highly variable features. A hybrid deep architecture, ILN-TL-DM, is presented in this paper for precise classification of lung cancer from CT scan images. Initially, an Adaptive Gaussian filtering method is applied during pre-processing to eliminate noise and enhance the quality of the CT image. An Improved Attention-based ResU-Net (P-ResU-Net) model is then used for segmentation, accurately isolating the lung and tumor areas from the rest of the image. During feature extraction, various features are derived from the segmented images, including Local Gabor Transitional Pattern (LGTrP), Pyramid of Histograms of Oriented Gradients (PHOG), deep features, and improved entropy-based features, all intended to strengthen the representation of the tumor areas. Finally, classification exploits a hybrid deep learning architecture integrating an improved LeNet structure with Transfer Learning (ILN-TL) and a DeepMaxout (DM) structure. The two model outputs are merged with a soft voting strategy, yielding the final classification result that separates cancerous from non-cancerous tissues. The strategy greatly enhances the accuracy and robustness of lung cancer detection, showcasing how combining sophisticated neural network architectures with feature engineering and ensemble methods can improve medical image classification. The ILN-TL-DM model consistently outperforms the conventional methods with greater accuracy (0.962), specificity (0.955) and NPV (0.964).
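The soft-voting merge of the two branches can be sketched as a weighted average of class-probability vectors followed by an argmax. Equal weights and the variable names below are our assumptions; the paper does not state them:

```python
def soft_vote(prob_a, prob_b, weights=(0.5, 0.5)):
    """Soft voting: average the two models' class-probability vectors,
    then take the argmax. Returns (predicted_class, fused_probs)."""
    wa, wb = weights
    fused = [wa * pa + wb * pb for pa, pb in zip(prob_a, prob_b)]
    return fused.index(max(fused)), fused

# Hypothetical branch outputs for [non-cancerous, cancerous]:
ilntl_out    = [0.30, 0.70]   # improved LeNet + transfer-learning branch
deepmaxout_out = [0.45, 0.55]  # DeepMaxout branch
label, fused = soft_vote(ilntl_out, deepmaxout_out)
print(label, fused)            # class 1 (cancerous), fused [0.375, 0.625]
```

Unlike hard (majority) voting, soft voting preserves each branch's confidence, so a strongly confident branch can outweigh a weakly opposed one.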

AI enhanced diagnostic accuracy and workload reduction in hepatocellular carcinoma screening.

Lu RF, She CY, He DN, Cheng MQ, Wang Y, Huang H, Lin YD, Lv JY, Qin S, Liu ZZ, Lu ZR, Ke WP, Li CQ, Xiao H, Xu ZF, Liu GJ, Yang H, Ren J, Wang HB, Lu MD, Huang QH, Chen LD, Wang W, Kuang M

PubMed · Aug 2, 2025
Hepatocellular carcinoma (HCC) ultrasound screening encounters challenges related to accuracy and to radiologists' workload. This retrospective, multicenter study assessed four artificial intelligence (AI)-enhanced strategies using 21,934 liver ultrasound images from 11,960 patients to improve HCC ultrasound screening accuracy and reduce radiologist workload. UniMatch was used for lesion detection and LivNet for classification, trained on 17,913 images. Among the strategies tested, Strategy 4, which combined AI for initial detection with radiologist evaluation of negative cases in both the detection and classification phases, outperformed the others. It not only approached the high sensitivity of the original algorithm (0.956 vs. 0.991) but also improved specificity (0.787 vs. 0.698), reduced radiologist workload by 54.5%, and decreased both recall and false-positive rates. This approach demonstrates a successful model of human-AI collaboration, enhancing clinical outcomes while mitigating unnecessary patient anxiety and system burden by minimizing recalls and false positives.
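The workload arithmetic behind a negatives-review strategy can be sketched under a simplified reading of Strategy 4 (radiologists re-read only the AI-negative cases; counts are hypothetical and the study's actual pipeline also has a classification phase):

```python
def triage_workload(n_total, n_ai_positive):
    """Fraction of the baseline reading workload that remains when
    radiologists review only the AI-negative cases (simplified sketch
    of a negatives-review triage, not the study's exact protocol)."""
    n_reviewed = n_total - n_ai_positive
    return n_reviewed / n_total

# Hypothetical screening batch:
remaining = triage_workload(n_total=1000, n_ai_positive=400)
print(f"radiologist workload: {remaining:.0%} of baseline")
```

The trade-off is that sensitivity then depends on the AI's positives being trustworthy, which is why the study reports sensitivity alongside the 54.5% workload reduction.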

Integrating Time and Frequency Domain Features of fMRI Time Series for Alzheimer's Disease Classification Using Graph Neural Networks.

Peng W, Li C, Ma Y, Dai W, Fu D, Liu L, Liu L, Yu N, Liu J

PubMed · Aug 2, 2025
Accurate and early diagnosis of Alzheimer's Disease (AD) is crucial for timely interventions and treatment advancement. Functional Magnetic Resonance Imaging (fMRI), measuring brain blood-oxygen level changes over time, is a powerful AD-diagnosis tool. However, current fMRI-based AD diagnosis methods rely on noise-susceptible time-domain features and focus only on synchronous brain-region interactions in the same time phase, neglecting asynchronous ones. To overcome these issues, we propose Frequency-Time Fusion Graph Neural Network (FTF-GNN). It integrates frequency- and time-domain features for robust AD classification, considering both asynchronous and synchronous brain-region interactions. First, we construct a fully connected hypervariate graph, where nodes represent brain regions and their Blood Oxygen Level-Dependent (BOLD) values at a time series point. A Discrete Fourier Transform (DFT) transforms these BOLD values from the spatial to the frequency domain for frequency-component analysis. Second, a Fourier-based Graph Neural Network (FourierGNN) processes the frequency features to capture asynchronous brain region connectivity patterns. Third, these features are converted back to the time domain and reshaped into a matrix where rows represent brain regions and columns represent their frequency-domain features at each time point. Each brain region then fuses its frequency-domain features with position encoding along the time series, preserving temporal and spatial information. Next, we build a brain-region network based on synchronous BOLD value associations and input the brain-region network and the fused features into a Graph Convolutional Network (GCN) to capture synchronous brain region connectivity patterns. Finally, a fully connected network classifies the brain-region features. 
Experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset demonstrate the method's effectiveness: Our model achieves 91.26% accuracy and 96.79% AUC in AD versus Normal Control (NC) classification, showing promising performance. For early-stage detection, it attains state-of-the-art performance in distinguishing NC from Late Mild Cognitive Impairment (LMCI) with 87.16% accuracy and 93.22% AUC. Notably, in the challenging task of differentiating LMCI from AD, FTF-GNN achieves optimal performance (85.30% accuracy, 94.56% AUC), while also delivering competitive results (77.40% accuracy, 91.17% AUC) in distinguishing Early MCI (EMCI) from LMCI-the most clinically complex subtype classification. These results indicate that leveraging complementary frequency- and time-domain information, along with considering asynchronous and synchronous brain-region interactions, can address existing approach limitations, offering a robust neuroimaging-based diagnostic solution.
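The DFT step that moves each region's BOLD series into the frequency domain can be written out naively to show what the frequency components are. This is an illustrative stdlib-only sketch (real pipelines use an FFT), with a toy series standing in for BOLD values:

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform of a real-valued series,
    the frequency-domain step applied to each region's BOLD values."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# A toy "BOLD series": a pure oscillation completing 2 cycles over 8
# samples concentrates its energy in the k=2 bin (and its mirror k=6).
series = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
spectrum = [abs(c) for c in dft(series)]
print([round(s, 3) for s in spectrum])
```

Working in this domain lets the FourierGNN stage relate regions whose activity shares frequency content even when the oscillations are out of phase, which is the "asynchronous interaction" the paper targets.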

Deep learning-driven incidental detection of vertebral fractures in cancer patients: advancing diagnostic precision and clinical management.

Mniai EM, Laletin V, Tselikas L, Assi T, Bonnet B, Camez AO, Zemmouri A, Muller S, Moussa T, Chaibi Y, Kiewsky J, Quenet S, Avare C, Lassau N, Balleyguier C, Ayobi A, Ammari S

PubMed · Aug 2, 2025
Vertebral compression fractures (VCFs) are the most prevalent skeletal manifestations of osteoporosis in cancer patients. Yet they are frequently missed or not reported in routine clinical radiology, adversely impacting patient outcomes and quality of life. This study evaluates the diagnostic performance of a deep-learning (DL)-based application and its potential to reduce the miss rate of incidental VCFs in a high-risk cancer population. We retrospectively analysed thoraco-abdomino-pelvic (TAP) CT scans from 1556 patients with stage IV cancer collected consecutively over a 4-month period (September-December 2023) in a tertiary cancer center. A DL-based application flagged cases positive for VCFs, which were subsequently reviewed by two expert radiologists for validation. Additionally, grade 3 fractures identified by the application were independently assessed by two expert interventional radiologists to determine their eligibility for vertebroplasty. Of the 1556 cases, 501 were flagged as positive for VCF by the application, with 436 confirmed as true positives by expert review, yielding a positive predictive value (PPV) of 87%. Common causes of false positives included sclerotic vertebral metastases, scoliosis, and vertebral misidentification. Notably, 83.5% (364/436) of true positive VCFs were absent from radiology reports, indicating a substantial non-report rate in routine practice. Ten grade 3 fractures were overlooked or not reported by radiologists; among them, 9 were deemed suitable for vertebroplasty by expert interventional radiologists. This study underscores the potential of DL-based applications to improve the detection of VCFs. The analysed tool can assist radiologists in detecting more incidental vertebral fractures in adult cancer patients, optimising timely treatment and reducing associated morbidity and economic burden. Moreover, it might enhance patient access to interventional treatments such as vertebroplasty.
These findings highlight the transformative role that DL can play in optimising clinical management and outcomes for osteoporosis-related VCFs in cancer patients.