
Enhanced pulmonary nodule detection with U-Net, YOLOv8, and swin transformer.

Wang X, Wu H, Wang L, Chen J, Li Y, He X, Chen T, Wang M, Guo L

PubMed · Jul 1 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, emphasizing the critical need for early pulmonary nodule detection to improve patient outcomes. Current methods encounter challenges in detecting small nodules and exhibit high false positive rates, placing an additional diagnostic burden on radiologists. This study aimed to develop a two-stage deep learning model integrating U-Net, YOLOv8s, and the Swin transformer to enhance pulmonary nodule detection in computed tomography (CT) images, particularly for small nodules, with the goal of improving detection accuracy and reducing false positives. We utilized the LUNA16 dataset (888 CT scans) and an additional 308 CT scans from Tianjin Chest Hospital. Images were preprocessed for consistency. The proposed model first employs U-Net for precise lung segmentation, followed by YOLOv8s augmented with the Swin transformer for nodule detection. The shape-aware IoU (SIoU) loss function was implemented to improve bounding box predictions. On the LUNA16 dataset, the model achieved a precision of 0.898, a recall of 0.851, and a mean average precision at 50% IoU (mAP50) of 0.879, outperforming state-of-the-art models. On the Tianjin Chest Hospital dataset, it achieved a precision of 0.855, a recall of 0.872, and an mAP50 of 0.862. This study presents a two-stage deep learning model that leverages U-Net, YOLOv8s, and the Swin transformer for enhanced pulmonary nodule detection in CT images. The model demonstrates high accuracy and a reduced false positive rate, suggesting its potential as a useful tool for early lung cancer diagnosis and treatment.
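The SIoU loss builds on the plain IoU overlap measure by adding angle-, distance-, and shape-aware penalty terms. As a minimal sketch of the IoU core that such losses extend (the box format and helper name are illustrative, not the authors' code):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

The mAP50 figures reported above count a predicted box as correct when this value against a ground-truth box is at least 0.5.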

Performance of artificial intelligence in evaluating maxillary sinus mucosal alterations in imaging examinations: systematic review.

Moreira GC, do Carmo Ribeiro CS, Verner FS, Lemos CAA

PubMed · Jul 1 2025
This systematic review aimed to assess the performance of artificial intelligence (AI) in evaluating maxillary sinus (MS) mucosal alterations in imaging examinations compared to human analysis. Studies that presented radiographic images for the diagnosis of paranasal sinus diseases, as well as control groups for AI, were included. Articles were excluded if they performed tests on animals, addressed other conditions or surgical methods, did not present data on the diagnosis of MS or on the outcomes of interest (area under the curve, sensitivity, specificity, and accuracy), or compared the outcome only among different AIs. Searches were conducted in five electronic databases and the gray literature. Risk of bias (RB) was assessed using QUADAS-2 and the certainty of evidence using GRADE. Six studies were included. All were retrospective observational studies, with serious RB and considerable methodological heterogeneity. AI performed similarly to human readers; however, imprecision was rated as serious for the outcomes, and the certainty of evidence was classified as very low under the GRADE approach. Furthermore, a dose-response effect was observed: specialists demonstrated greater mastery of MS diagnosis than resident professionals or general clinicians. Considering these outcomes, AI represents a complementary tool for assessing maxillary mucosal alterations, especially for professionals with less experience. Performance analysis and the definition of comparison parameters should be encouraged in future research. AI is a potential complementary tool for assessing maxillary sinus mucosal alterations; however, studies still lack methodological standardization.

External Validation of an Artificial Intelligence Algorithm Using Biparametric MRI and Its Simulated Integration with Conventional PI-RADS for Prostate Cancer Detection.

Belue MJ, Mukhtar V, Ram R, Gokden N, Jose J, Massey JL, Biben E, Buddha S, Langford T, Shah S, Harmon SA, Turkbey B, Aydin AM

PubMed · Jul 1 2025
The Prostate Imaging Reporting and Data System (PI-RADS) exhibits considerable variability in inter-reader performance. Artificial intelligence (AI) algorithms have been suggested to provide performance comparable to PI-RADS for assessing prostate cancer (PCa) risk, albeit tested in highly selected cohorts. This study aimed to assess an AI algorithm for PCa detection in a clinical practice setting and to simulate integration of the AI model with PI-RADS for assessment of indeterminate PI-RADS 3 lesions. This retrospective cohort study externally validated a biparametric MRI-based AI model for PCa detection in a consecutive cohort of patients who underwent prostate MRI and subsequent targeted and systematic prostate biopsy at a urology clinic between January 2022 and March 2024. Radiologist interpretations followed PI-RADS v2.1, and biopsies were conducted per PI-RADS scores. The previously developed AI model provided lesion segmentations and cancer probability maps, which were compared to biopsy results. Additionally, we conducted a simulation that adjusted biopsy thresholds for index PI-RADS category 3 studies, in which AI predictions could upgrade them to PI-RADS category 4. Among 144 patients with a median age of 70 years and a median PSA density of 0.17 ng/mL/cc, the AI's sensitivity for detection of PCa (86.6%) and clinically significant PCa (csPCa, 88.4%) was comparable to that of radiologists (85.7%, p=0.84, and 89.5%, p=0.80, respectively). The simulation combining radiologist and AI evaluations improved csPCa sensitivity by 5.8% (p=0.025). The combination of AI, PI-RADS, and PSA density provided the best diagnostic performance for csPCa (area under the curve [AUC]=0.76). The AI algorithm demonstrated PCa detection rates comparable to PI-RADS. The combination of AI with radiologist interpretation improved sensitivity and could be instrumental in the assessment of low-risk and indeterminate PI-RADS lesions.
The role of AI in PCa screening remains to be further elucidated.
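The simulated integration described above, upgrading an indeterminate PI-RADS 3 study to category 4 when the AI's cancer probability is high enough, reduces to a small decision rule. In this sketch the probability threshold is an illustrative assumption, not a parameter reported by the study:

```python
def integrate_ai(pirads, ai_probability, threshold=0.5):
    """Upgrade an indeterminate PI-RADS 3 study to category 4 when the AI
    cancer-probability map exceeds a (hypothetical) threshold; all other
    categories follow the radiologist's read unchanged."""
    if pirads == 3 and ai_probability >= threshold:
        return 4
    return pirads
```

Sweeping `threshold` over a validation set is how one would trade sensitivity against the number of additional biopsies triggered.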

Artificial Intelligence in CT Angiography for the Detection of Coronary Artery Stenosis and Calcified Plaque: A Systematic Review and Meta-analysis.

Du M, He S, Liu J, Yuan L

PubMed · Jul 1 2025
We aimed to evaluate the diagnostic performance of artificial intelligence (AI) in detecting coronary artery stenosis and calcified plaque on CT angiography (CTA), comparing it with that of radiologists. A thorough literature search was performed in PubMed, Web of Science, and Embase for studies published through October 2024. Studies were included if they evaluated AI models for detecting coronary artery stenosis and calcified plaque on CTA. A bivariate random-effects model was employed to determine pooled sensitivity and specificity. Study heterogeneity was assessed using the I² statistic. Risk of bias was assessed with the revised Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, and the level of evidence was graded using the Grading of Recommendations Assessment, Development and Evaluation (GRADE) system. Of 1071 initially identified studies, 17 studies with 5560 patients and images were included in the final analysis. For coronary artery stenosis ≥50%, AI showed a sensitivity of 0.92 (95% CI: 0.88-0.95), specificity of 0.87 (95% CI: 0.80-0.92), and AUC of 0.96 (95% CI: 0.94-0.97), outperforming radiologists, who had a sensitivity of 0.85 (95% CI: 0.67-0.94), specificity of 0.84 (95% CI: 0.62-0.94), and AUC of 0.91 (95% CI: 0.89-0.93). For stenosis ≥70%, AI achieved a sensitivity of 0.88 (95% CI: 0.70-0.96), specificity of 0.96 (95% CI: 0.90-0.99), and AUC of 0.98 (95% CI: 0.96-0.99). In calcified plaque detection, AI demonstrated a sensitivity of 0.93 (95% CI: 0.84-0.97), specificity of 0.94 (95% CI: 0.88-0.96), and AUC of 0.98 (95% CI: 0.96-0.99). AI-based CTA demonstrated superior diagnostic performance compared to clinicians in identifying ≥50% coronary artery stenosis and excellent performance in recognizing ≥70% stenosis and calcified plaque. However, limitations include retrospective study designs and heterogeneity in CTA technologies.
Further external validation through prospective, multicenter trials is required to confirm these findings.
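As a simplified illustration of how per-study results are pooled in such a meta-analysis, sensitivities can be combined by inverse-variance weighting on the logit scale. Note this is a univariate fixed-effect stand-in for the bivariate random-effects model the review actually used, shown only to make the pooling idea concrete:

```python
import math

def pool_sensitivity(tp_fn_pairs):
    """Inverse-variance fixed-effect pooling of sensitivities on the logit
    scale (a simplified stand-in for a bivariate random-effects model).
    Each pair is (true positives, false negatives); a 0.5 continuity
    correction guards against zero cells."""
    num = den = 0.0
    for tp, fn in tp_fn_pairs:
        tp, fn = tp + 0.5, fn + 0.5
        logit = math.log(tp / fn)
        var = 1.0 / tp + 1.0 / fn        # variance of the logit
        num += logit / var
        den += 1.0 / var
    pooled = num / den
    return 1.0 / (1.0 + math.exp(-pooled))  # back-transform to a proportion
```

A real analysis would model sensitivity and specificity jointly with between-study random effects, which this sketch deliberately omits.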

ResNet-Transformer deep learning model-aided detection of dens evaginatus.

Wang S, Liu J, Li S, He P, Zhou X, Zhao Z, Zheng L

PubMed · Jul 1 2025
Dens evaginatus is a dental morphological developmental anomaly. Failing to detect it may lead to tubercle fracture and pulpal/periapical disease; consequently, early detection and intervention are essential for preserving vital pulp. This study aimed to develop a deep learning model to assist dentists in the early diagnosis of dens evaginatus, thereby supporting early intervention and mitigating the risk of severe consequences. A deep learning model was developed using panoramic radiographs from 1410 patients aged 3-16 years, with high-quality annotations enabling automatic detection of dens evaginatus. Model performance and the model's efficacy in aiding dentists were evaluated. The model demonstrated commendable sensitivity (0.8600) and specificity (0.9200) and outperformed dentists in detecting dens evaginatus, with an F1-score of 0.8866 versus their average F1-score of 0.8780, indicating that the model could detect dens evaginatus with greater precision. Furthermore, with its support, young dentists paid closer attention to dens evaginatus in tooth germs and achieved improved diagnostic accuracy. Based on these results, integrating deep learning for dens evaginatus detection can augment dentists' proficiency in identifying this anomaly.
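The F1-score used above is the harmonic mean of precision and recall; the abstract's 0.8866 would follow from the model's underlying precision and recall, which are not listed. As a quick reference:

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 is pulled toward the weaker of the two components, which is why it is preferred over plain accuracy for imbalanced detection tasks like this one.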

Implementing an AI algorithm in the clinical setting: a case study for the accuracy paradox.

Scaringi JA, McTaggart RA, Alvin MD, Atalay M, Bernstein MH, Jayaraman MV, Jindal G, Movson JS, Swenson DW, Baird GL

PubMed · Jul 1 2025
We report our experience implementing an algorithm for the detection of large vessel occlusion (LVO) in suspected stroke in the emergency setting, including its performance, and offer an explanation as to why it was poorly received by radiologists. An algorithm was deployed in the emergency room at a single tertiary care hospital for the detection of LVO on CT angiography (CTA) between September 1 and 27, 2021, and a retrospective analysis of its accuracy was performed. During the study period, 48 patients (mean age 60.3 ± 18.2 years; 32 women) underwent CTA in the emergency department to evaluate for emergent LVO, with 2 positive cases. The LVO algorithm demonstrated a sensitivity of 100% and a specificity of 92%. While the sensitivity at our institution exceeded the manufacturer's reported values, the false discovery rate was 67%, leading to the perception that the algorithm was inaccurate. In addition, the positive predictive value at our institution was 33%, compared with the manufacturer's reported 95-98%. This disparity is attributable to the difference in disease prevalence: 4.1% at our institution versus 45.0-62.2% in the manufacturer's data. Despite the LVO algorithm's accuracy performing as advertised, it was perceived as inaccurate because it produced more false positives than anticipated, and it was removed from clinical practice. This was likely due to a cognitive bias called the accuracy paradox. To mitigate it, radiologists should be presented with metrics based on a disease prevalence similar to that of their own practice when evaluating and utilizing artificial intelligence tools. Question: An artificial intelligence algorithm for detecting emergent LVOs was implemented in an emergency department but was perceived to be inaccurate. Findings: Although the algorithm's accuracy was both high and as advertised, it demonstrated a high false discovery rate.
Clinical relevance: The misperception of the algorithm's inaccuracy was likely a special case of the base rate fallacy, the accuracy paradox. Equipping radiologists with an algorithm's false discovery rate based on local prevalence will set realistic expectations for real-world performance.
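The prevalence dependence behind the accuracy paradox follows directly from Bayes' rule: holding sensitivity and specificity fixed, positive predictive value collapses as prevalence falls. Plugging in the figures reported above approximately reproduces both the local PPV (the study's 33% comes from raw counts; the formula gives about 35%) and the manufacturer's range:

```python
def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Local prevalence (4.1%): PPV ~0.35, i.e. roughly 2 of 3 flags are false.
local = ppv(1.00, 0.92, 0.041)
# Manufacturer-like prevalence (~50%): PPV climbs above 0.9.
vendor = ppv(1.00, 0.92, 0.50)
```

The same algorithm, with identical sensitivity and specificity, thus looks excellent in an enriched validation cohort and unreliable in a low-prevalence emergency stream.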

AI-Driven insights in pancreatic cancer imaging: from pre-diagnostic detection to prognostication.

Antony A, Mukherjee S, Bi Y, Collisson EA, Nagaraj M, Murlidhar M, Wallace MB, Goenka AH

PubMed · Jul 1 2025
Pancreatic ductal adenocarcinoma (PDAC) is the third leading cause of cancer-related deaths in the United States, largely because of frequent late-stage diagnosis and a poor five-year survival rate. A significant barrier to early detection, even in high-risk cohorts, is that the pancreas often appears morphologically normal during the pre-diagnostic phase; yet the disease can progress rapidly from subclinical stages to widespread metastasis, undermining the effectiveness of screening. Recently, artificial intelligence (AI) applied to cross-sectional imaging has shown significant potential in identifying subtle, early-stage changes in pancreatic tissue that are often imperceptible to the human eye. AI-driven imaging also aids the discovery of prognostic and predictive biomarkers essential for personalized treatment planning. This article integrates a critical discussion of AI's role in detecting visually occult PDAC on pre-diagnostic imaging, addresses challenges of model generalizability, and emphasizes solutions such as standardized datasets and clinical workflows. By focusing on both technical advancements and practical implementation, it provides a forward-looking conceptual framework that bridges current gaps in AI-driven PDAC research.

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department after a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays first alone and then with an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated by five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on each patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% hip, 35.2% pelvic ring) in 940 patients and 68 pelvic fractures (60% hip, 40% pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se-pelvic: 77.25% to 83.73%, p < 0.001; Se-hip: 93.24% to 96.49%, p < 0.001; Se-pelvic ring: 54.60% to 64.50%, p < 0.001), but specificity decreased significantly (Spe-pelvic: 95.24% to 93.25%, p = 0.005; Spe-hip: 98.30% to 96.90%, p = 0.005). In the 100-patient sample, the two emergency physicians (E1 and E2) improved their fracture detection sensitivity across the pelvic area by +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (p = 0.04) and E2's from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012) and E2's from 19.35% to 32.26% (p = 0.043).
AI improved the diagnostic performance for emergency physicians and radiologists with limited experience in pelvic fracture screening.
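The abstract does not state which statistical test produced these paired-comparison p-values; a common choice for comparing a reader's sensitivity with and without AI on the same cases is McNemar's test on the discordant pairs, sketched here as an assumption rather than the authors' method:

```python
from math import erf, sqrt

def mcnemar_p(b, c):
    """Two-sided McNemar test with continuity correction.
    b = lesions caught without AI but missed with it; c = the reverse.
    The chi-square(1 df) p-value is obtained via the normal survival
    function, since Z^2 ~ chi-square(1) for Z ~ N(0, 1)."""
    if b + c == 0:
        return 1.0
    chi2 = max(0.0, abs(b - c) - 1) ** 2 / (b + c)
    z = sqrt(chi2)
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))
```

Only the discordant cases matter: fractures both reads caught, or both missed, carry no information about which condition is more sensitive.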

Improved unsupervised 3D lung lesion detection and localization by fusing global and local features: Validation in 3D low-dose computed tomography.

Lee JH, Oh SJ, Kim K, Lim CY, Choi SH, Chung MJ

PubMed · Jul 1 2025
Unsupervised anomaly detection (UAD) is crucial in low-dose computed tomography (LDCT). Recent AI approaches that leverage global features have enabled effective UAD with minimal training data from normal patients. However, because this approach ignores local features, it is vulnerable to missing deep lesions within the lungs: the conventional use of global features can achieve high specificity, but often with limited sensitivity. A UAD model with high sensitivity is essential to prevent false negatives, especially when screening for diseases with high mortality rates. We developed a new LDCT UAD model that leverages local features, achieving a 17.5% sensitivity improvement over global methods. Furthermore, by integrating this approach with conventional global-feature techniques, we consolidated the advantages of each model, the high sensitivity of the local model and the high specificity of the global model, into a single trained model (17.6% and 33.5% improvements, respectively). Without additional training, this fixed model is expected to provide significant diagnostic efficacy in LDCT applications where both high sensitivity and specificity are essential. Code is available at https://github.com/kskim-phd/Fusion-UADL.
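The paper's fusion mechanism is more involved than this, but the basic idea of consolidating a high-specificity global anomaly score with a high-sensitivity local one can be sketched as a normalized weighted blend; the min-max normalization and the weighting scheme are assumptions for illustration:

```python
def fuse_scores(global_scores, local_scores, weight=0.5):
    """Min-max normalize each per-case anomaly score list, then blend.
    `weight` balances the global (specific) vs. local (sensitive) branch."""
    def normalize(xs):
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1.0          # avoid division by zero
        return [(x - lo) / span for x in xs]
    g, l = normalize(global_scores), normalize(local_scores)
    return [weight * gi + (1 - weight) * li for gi, li in zip(g, l)]
```

Normalizing first matters because the two branches typically produce scores on different scales; without it, one branch silently dominates the blend.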

Comparison of CNNs and Transformer Models in Diagnosing Bone Metastases in Bone Scans Using Grad-CAM.

Pak S, Son HJ, Kim D, Woo JY, Yang I, Hwang HS, Rim D, Choi MS, Lee SH

PubMed · Jul 1 2025
Convolutional neural networks (CNNs) have been studied for detecting bone metastases on bone scans; however, the application of ConvNeXt and transformer models has not yet been explored. This study evaluated the performance of various deep learning models, including ConvNeXt and transformer models, in diagnosing metastatic lesions on bone scans. We retrospectively analyzed bone scans from patients with cancer obtained at two institutions: the training and validation sets (n=4626) were from Hospital 1 and the test set (n=1428) was from Hospital 2. The deep learning models evaluated were ResNet18, the Data-Efficient Image Transformer (DeiT), the Vision Transformer (ViT Large 16), the Swin Transformer (Swin Base), and ConvNeXt Large. Gradient-weighted class activation mapping (Grad-CAM) was used for visualization. On both the validation and test sets, the ConvNeXt Large model performed best (0.969 and 0.885, respectively), followed by the Swin Base model (0.965 and 0.840, respectively), both of which significantly outperformed ResNet18 (0.892 and 0.725, respectively). Subgroup analyses revealed that all models demonstrated greater diagnostic accuracy for patients with polymetastasis than for those with oligometastasis. Grad-CAM visualization showed that the ConvNeXt Large model focused more on local lesions, whereas the Swin Base model attended to global areas such as the axial skeleton and pelvis. Compared with traditional CNN and transformer models, the ConvNeXt model demonstrated superior diagnostic performance in detecting bone metastases on bone scans, especially in cases of polymetastasis, suggesting its potential in medical image analysis.
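Grad-CAM itself is simple at its core: each channel's feature map is weighted by the spatial mean of its gradient, the weighted maps are summed, and a ReLU keeps only regions that positively support the target class. A framework-free toy sketch (real use hooks into a trained network's activations and gradients, which this omits):

```python
def grad_cam(feature_maps, gradients):
    """Toy Grad-CAM. feature_maps and gradients are lists of equal-shape
    2D grids (one per channel). Each map is weighted by the mean of its
    gradient; the final ReLU discards negatively contributing regions."""
    h, w = len(feature_maps[0]), len(feature_maps[0][0])
    cam = [[0.0] * w for _ in range(h)]
    for fmap, grad in zip(feature_maps, gradients):
        alpha = sum(sum(row) for row in grad) / (h * w)  # channel weight
        for i in range(h):
            for j in range(w):
                cam[i][j] += alpha * fmap[i][j]
    return [[max(0.0, v) for v in row] for row in cam]   # ReLU
```

The resulting coarse map is upsampled to image resolution and overlaid as a heatmap, which is what distinguishes the local focus of ConvNeXt from the global focus of Swin in the comparison above.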
