
Single-step prediction of inferior alveolar nerve injury after mandibular third molar extraction using contrastive learning and Bayesian auto-tuned deep learning model.

Yoon K, Choi Y, Lee M, Kim J, Kim JY, Kim JW, Choi J, Park W

PubMed | Sep 27, 2025
Inferior alveolar nerve (IAN) injury is a critical complication of mandibular third molar extraction. This study aimed to construct and evaluate a deep learning framework that integrates contrastive learning and Bayesian optimization to enhance predictive performance on cone-beam computed tomography (CBCT) and panoramic radiographs. A retrospective dataset of 902 panoramic radiographs and 1,500 CBCT images was used. Five deep learning architectures (MobileNetV2, ResNet101D, Vision Transformer, Twins-SVT, and SSL-ResNet50) were trained with and without contrastive learning and Bayesian optimization. Model performance was evaluated using accuracy, F1-score, and comparison with oral and maxillofacial surgeons (OMFSs). Contrastive learning significantly improved the F1-scores across all models (e.g., MobileNetV2: 0.302 to 0.740; ResNet101D: 0.188 to 0.689; Vision Transformer: 0.275 to 0.704; Twins-SVT: 0.370 to 0.719; SSL-ResNet50: 0.109 to 0.576). Bayesian optimization further enhanced the F1-scores for MobileNetV2 (from 0.740 to 0.923), ResNet101D (from 0.689 to 0.857), Vision Transformer (from 0.704 to 0.871), Twins-SVT (from 0.719 to 0.857), and SSL-ResNet50 (from 0.576 to 0.875). The AI model outperformed OMFSs on CBCT cross-sectional images (F1-score: 0.923 vs. 0.667) but underperformed on panoramic radiographs (0.666 vs. 0.730). The proposed single-step deep learning approach effectively predicts IAN injury, with contrastive learning addressing data imbalance and Bayesian optimization refining model performance. While artificial intelligence surpasses human performance on CBCT images, panoramic radiograph analysis still benefits from expert interpretation. Future work should focus on multi-center validation and explainable artificial intelligence for broader clinical adoption.
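A minimal sketch of the Bayesian auto-tuning stage this abstract describes, assuming Optuna (whose default TPE sampler is a Bayesian-style optimizer) and synthetic imbalanced data in place of the CBCT/panoramic images; the model, search space, and metric wiring are illustrative, not the authors' pipeline.

```python
# Hedged sketch: Bayesian hyperparameter search with Optuna, optimizing
# F1 (the paper's headline metric) on synthetic, imbalanced stand-in data.
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

# Imbalanced labels mimic the rarity of IAN injury; values are arbitrary.
X, y = make_classification(n_samples=500, n_features=20,
                           weights=[0.85, 0.15], random_state=0)

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "max_depth": trial.suggest_int("max_depth", 2, 6),
        "n_estimators": trial.suggest_int("n_estimators", 50, 300),
    }
    clf = GradientBoostingClassifier(**params, random_state=0)
    return cross_val_score(clf, X, y, cv=3, scoring="f1").mean()

study = optuna.create_study(direction="maximize")  # TPE sampler by default
study.optimize(objective, n_trials=30)
print(study.best_params, round(study.best_value, 3))
```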

COVID-19 Pneumonia Diagnosis Using Medical Images: Deep Learning-Based Transfer Learning Approach.

Dharmik A

PubMed | Sep 26, 2025
SARS-CoV-2, the causative agent of COVID-19, remains a global health concern due to its high transmissibility and evolving variants. Although vaccination efforts and therapeutic advancements have mitigated disease severity, emerging mutations continue to challenge diagnostics and containment strategies. As of mid-February 2025, global test positivity has risen to 11%, marking the highest level in over 6 months, despite widespread immunization efforts. Newer variants demonstrate enhanced host cell binding, increasing both infectivity and diagnostic complexity. This study aimed to evaluate the effectiveness of deep transfer learning in delivering a rapid, accurate, and mutation-resilient COVID-19 diagnosis from medical imaging, with a focus on scalability and accessibility. An automated detection system was developed using state-of-the-art convolutional neural networks, including VGG16 (Visual Geometry Group network-16 layers), ResNet50 (residual network-50 layers), ConvNeXtTiny (convolutional next-tiny), MobileNet (mobile network), NASNetMobile (neural architecture search network-mobile version), and DenseNet121 (densely connected convolutional network-121 layers), to detect COVID-19 from chest X-ray and computed tomography (CT) images. Among all the models evaluated, DenseNet121 emerged as the best-performing architecture for COVID-19 diagnosis using X-ray and CT images. It achieved an impressive accuracy of 98%, with a precision of 96.9%, a recall of 98.9%, an F1-score of 97.9%, and an area under the curve score of 99.8%, indicating a high degree of consistency and reliability in detecting both positive and negative cases. The confusion matrix showed minimal false positives and false negatives, underscoring the model's robustness in real-world diagnostic scenarios. Given its performance, DenseNet121 is a strong candidate for deployment in clinical settings and serves as a benchmark for future improvements in artificial intelligence-assisted diagnostic tools. The study results underscore the potential of artificial intelligence-powered diagnostics in supporting early detection and global pandemic response. With careful optimization, deep learning models can address critical gaps in testing, particularly in settings constrained by limited resources or emerging variants.
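The transfer-learning recipe the abstract reports is standard enough to sketch; below is a hedged Keras outline with DenseNet121 (the best performer here) frozen as a feature extractor under a new binary head. Input size, head layers, and optimizer settings are assumptions, not the paper's exact configuration.

```python
# Sketch: DenseNet121 transfer learning for binary COVID-19 classification.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3))
base.trainable = False  # freeze ImageNet features; optionally fine-tune later

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),                    # assumed regularization
    layers.Dense(1, activation="sigmoid"),  # probability of COVID-positive
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.summary()
```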

A Framework for Guiding DDPM-Based Reconstruction of Damaged CT Projections Using Traditional Methods.

Zhang Z, Yang Y, Yang M, Guo H, Yang J, Shen X, Wang J

PubMed | Sep 26, 2025
Denoising Diffusion Probabilistic Models (DDPM) have emerged as a promising generative framework for sample synthesis, yet their limitations in detail preservation hinder practical applications in computed tomography (CT) image reconstruction. To address these technical constraints and enhance reconstruction quality from compromised CT projection data, this study proposes the Projection Hybrid Inverse Reconstruction Framework (PHIRF), a novel paradigm integrating conventional reconstruction methodologies with DDPM architecture. The framework implements a dual-phase approach: initially, conventional CT reconstruction algorithms (e.g., filtered back projection (FBP), algebraic reconstruction technique (ART), and maximum-likelihood expectation maximization (ML-EM)) are employed to generate preliminary reconstructions from incomplete projections, establishing low-dimensional feature representations. These features are subsequently parameterized and embedded as conditional constraints in the reverse diffusion process of DDPM, thereby guiding the generative model to synthesize enhanced tomographic images with improved structural fidelity. Comprehensive evaluations were conducted on three representative ill-posed projection scenarios: limited-angle projections, sparse-view acquisitions, and low-dose measurements. Experimental results demonstrate that PHIRF achieves state-of-the-art performance across all compromised data conditions, particularly in preserving fine anatomical details and suppressing reconstruction artifacts. Quantitative metrics and visual assessments confirm the framework's consistent superiority over existing deep learning-based reconstruction approaches, substantiating its adaptability to diverse projection degradation patterns. This hybrid architecture establishes a new paradigm for combining physical prior knowledge with data-driven generative models in medical image reconstruction tasks.
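To make the conditioning idea concrete, here is a toy PyTorch sketch in which a conventional reconstruction (e.g., an FBP image from the damaged projections) is concatenated to the noisy sample at every reverse-diffusion step; the denoiser, noise schedule, and image sizes are placeholders, not PHIRF's actual components.

```python
# Toy sketch: DDPM ancestral sampling guided by a conventional-reconstruction prior.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):  # stand-in for the paper's network
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )
    def forward(self, x_t, cond):
        # condition on the conventional-reconstruction prior via concatenation
        return self.net(torch.cat([x_t, cond], dim=1))  # predicted noise

T = 50
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

model = TinyDenoiser()
fbp_prior = torch.randn(1, 1, 64, 64)  # placeholder conventional reconstruction
x = torch.randn(1, 1, 64, 64)          # start the reverse process from noise

with torch.no_grad():
    for t in reversed(range(T)):
        eps = model(x, fbp_prior)
        x = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = x + torch.sqrt(betas[t]) * torch.randn_like(x)  # stochastic step
```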

A novel deep neural architecture for efficient and scalable multidomain image classification.

Nobel SMN, Tasir MAM, Noor H, Monowar MM, Hamid MA, Sayeed MS, Islam MR, Mridha MF, Dey N

PubMed | Sep 26, 2025
Deep learning has significantly advanced the field of computer vision; however, developing models that generalize effectively across diverse image domains remains a major research challenge. In this study, we introduce DeepFreqNet, a novel deep neural architecture specifically designed for high-performance multi-domain image classification. The innovative aspect of DeepFreqNet lies in its combination of three powerful components: multi-scale feature extraction for capturing patterns at different resolutions, depthwise separable convolutions for enhanced computational efficiency, and residual connections to maintain gradient flow and accelerate convergence. This hybrid design improves the architecture's ability to learn discriminative features and ensures scalability across domains with varying data complexities. Unlike traditional transfer learning models, DeepFreqNet adapts seamlessly to diverse datasets without requiring extensive reconfiguration. Experimental results from nine benchmark datasets, including MRI tumor classification, blood cell classification, and sign language recognition, demonstrate superior performance, achieving classification accuracies between 98.96% and 99.97%. These results highlight the effectiveness and versatility of DeepFreqNet, showcasing a significant improvement over existing state-of-the-art methods and establishing it as a robust solution for real-world image classification challenges.
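The three ingredients the abstract names compose naturally into a single block; the following PyTorch sketch shows one plausible arrangement (parallel 3x3 and 5x5 depthwise-separable branches, fused and added back through a skip connection). Channel counts and kernel sizes are illustrative, not DeepFreqNet's.

```python
# Sketch: multi-scale + depthwise-separable + residual block.
import torch
import torch.nn as nn

class DepthwiseSeparable(nn.Module):
    def __init__(self, ch, k):
        super().__init__()
        self.dw = nn.Conv2d(ch, ch, k, padding=k // 2, groups=ch)  # depthwise
        self.pw = nn.Conv2d(ch, ch, 1)                             # pointwise
    def forward(self, x):
        return self.pw(self.dw(x))

class MultiScaleResidualBlock(nn.Module):
    def __init__(self, ch=32):
        super().__init__()
        self.branch3 = DepthwiseSeparable(ch, 3)  # finer receptive field
        self.branch5 = DepthwiseSeparable(ch, 5)  # coarser receptive field
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.act = nn.ReLU()
    def forward(self, x):
        y = self.fuse(torch.cat([self.branch3(x), self.branch5(x)], dim=1))
        return self.act(x + y)  # residual connection preserves gradient flow

x = torch.randn(1, 32, 64, 64)
print(MultiScaleResidualBlock()(x).shape)  # torch.Size([1, 32, 64, 64])
```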

Deep learning reconstruction for temporomandibular joint MRI: diagnostic interchangeability, image quality, and scan time reduction.

Jo GD, Jeon KJ, Choi YJ, Lee C, Han SS

PubMed | Sep 25, 2025
To evaluate the diagnostic interchangeability, image quality, and scan time of deep learning (DL)-reconstructed magnetic resonance imaging (MRI) compared with conventional MRI for the temporomandibular joint (TMJ). Patients with suspected TMJ disorder underwent sagittal proton density-weighted (PDW) and T2-weighted fat-suppressed (T2W FS) MRI using both conventional and DL reconstruction protocols in a single session. Three oral radiologists independently assessed disc shape, disc position, and joint effusion. Diagnostic interchangeability for these findings was evaluated by comparing interobserver agreement, with equivalence defined as a 95% confidence interval (CI) within ±5%. Qualitative image quality (sharpness, noise, artifacts, overall) was rated on a 5-point scale. Quantitative image quality was assessed by measuring the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) in the condyle, disc, and background air. Image quality scores were compared using the Wilcoxon signed-rank test, and SNR/CNR using paired t-tests. Scan times were directly compared. A total of 176 TMJs from 88 patients (mean age, 37 ± 16 years; 43 men) were analyzed. DL-reconstructed MRI demonstrated diagnostic equivalence to conventional MRI for disc shape, position, and effusion (equivalence indices < 3%; 95% CIs within ±5%). DL reconstruction significantly reduced noise in PDW and T2W FS sequences (p < 0.05) while maintaining sharpness and artifact levels. SNR and CNR were significantly improved (p < 0.05), except for disc SNR in PDW (p = 0.189). Scan time was reduced by 49.2%. DL-reconstructed TMJ MRI is diagnostically interchangeable with conventional MRI, offering improved image quality with a shorter scan time. Question: Long MRI scan times in patients with temporomandibular disorders can increase pain and motion-related artifacts, often compromising image quality in diagnostic settings. Findings: DL reconstruction is diagnostically interchangeable with conventional MRI for assessing disc shape, disc position, and effusion, while improving image quality and reducing scan time. Clinical relevance: DL reconstruction enables faster and more tolerable TMJ MRI workflows without compromising diagnostic accuracy, facilitating broader adoption in clinical settings where long scan times and motion artifacts often limit diagnostic efficiency.
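SNR and CNR figures like those compared here are typically computed from region-of-interest (ROI) statistics; a small sketch with synthetic ROI samples follows. Definitions vary between papers, so treat these formulas as one common convention rather than the authors' exact method.

```python
# Sketch: SNR/CNR from ROI statistics (signal mean over background-air SD).
import numpy as np

rng = np.random.default_rng(0)
condyle = rng.normal(120, 8, 500)  # signal ROI pixel values (synthetic)
disc = rng.normal(80, 8, 500)      # second tissue ROI (synthetic)
air = rng.normal(5, 4, 500)        # background-air ROI, noise reference

snr_condyle = condyle.mean() / air.std()
snr_disc = disc.mean() / air.std()
cnr = (condyle.mean() - disc.mean()) / air.std()
print(f"SNR(condyle)={snr_condyle:.1f}  SNR(disc)={snr_disc:.1f}  CNR={cnr:.1f}")
```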

Artificial intelligence applications in thyroid cancer care.

Pozdeyev N, White SL, Bell CC, Haugen BR, Thomas J

PubMed | Sep 25, 2025
Artificial intelligence (AI) has created tremendous opportunities to improve thyroid cancer care. We used the "artificial intelligence thyroid cancer" query to search the PubMed database until May 31, 2025. We highlight a set of high-impact publications selected based on technical innovation, large generalizable training datasets, and independent and/or prospective validation of AI. We review the key applications of AI for diagnosing and managing thyroid cancer. Our primary focus is on using computer vision to evaluate thyroid nodules on thyroid ultrasound, an area of thyroid AI that has gained the most attention from researchers and will likely have a significant clinical impact. We also highlight AI for detecting and predicting thyroid cancer neck lymph node metastases, digital cyto- and histopathology, large language models for unstructured data analysis, patient education, and other clinical applications. We discuss how thyroid AI technology has evolved and cite the most impactful research studies. Finally, we balance our excitement about the potential of AI to improve clinical care for thyroid cancer with current limitations, such as the lack of high-quality, independent prospective validation of AI in clinical trials, the uncertain added value of AI software, unknown performance on non-papillary thyroid cancer types, and the complexity of clinical implementation. AI promises to improve thyroid cancer diagnosis, reduce healthcare costs and enable personalized management. High-quality, independent prospective validation of AI in clinical trials is lacking and is necessary for the clinical community's broad adoption of this technology.

Deep learning powered breast ultrasound to improve characterization of breast masses: a prospective study.

Singla V, Garg D, Negi S, Mehta N, Pallavi T, Choudhary S, Dhiman A

PubMed | Sep 25, 2025
Background: The diagnostic performance of ultrasound (US) is heavily reliant on the operator's expertise. Advances in artificial intelligence (AI) have introduced deep learning (DL) tools that detect morphology beyond human perception, providing automated interpretations. Purpose: To evaluate Smart-Detect (S-Detect), a DL tool, for its potential to enhance diagnostic precision and standardize US assessments among radiologists with varying levels of experience. Material and Methods: This prospective observational study was conducted between May and November 2024. US and S-Detect analyses were performed by a breast imaging fellow. Images were independently analyzed by five radiologists with varying experience in breast imaging (<1 year to 15 years). Each radiologist assessed the images twice: without and with S-Detect. ROC analyses compared the diagnostic performance. True downgrades and upgrades were calculated to determine the biopsy reduction with AI assistance. Kappa statistics assessed radiologist agreement before and after incorporating S-Detect. Results: This study analyzed 230 breast masses from 216 patients. S-Detect demonstrated high specificity (92.7%), PPV (92.9%), NPV (87.9%), and accuracy (90.4%). It enhanced less experienced radiologists' performance, increasing sensitivity (85% to 93.33%), specificity (54.5% to 73.64%), and accuracy (70.43% to 83.91%; P < 0.001). AUC significantly increased for the less experienced radiologists (0.698 to 0.835; P < 0.001), with no significant gains for the expert radiologist. It also reduced variability in assessment between radiologists, with an increase in kappa agreement (0.459 to 0.696), and enabled significant downgrades, reducing unnecessary biopsies. Conclusion: The DL tool improves diagnostic accuracy, bridges the expertise gap, reduces reliance on invasive procedures, and enhances consistency in clinical decisions among radiologists.
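The reader-study statistics in this abstract (per-reader ROC/AUC and inter-reader kappa) are straightforward to reproduce; here is a hedged sketch on synthetic labels and scores standing in for the 230 mass assessments.

```python
# Sketch: per-reader AUC and inter-reader Cohen's kappa on synthetic data.
import numpy as np
from sklearn.metrics import roc_auc_score, cohen_kappa_score

rng = np.random.default_rng(1)
truth = rng.integers(0, 2, 230)               # stand-in reference labels
scores = truth * 0.6 + rng.random(230) * 0.4  # one reader's suspicion scores
print("AUC:", round(roc_auc_score(truth, scores), 3))

reader_a = (scores > 0.5).astype(int)         # binarized calls
reader_b = (truth * 0.5 + rng.random(230) * 0.5 > 0.5).astype(int)
print("kappa:", round(cohen_kappa_score(reader_a, reader_b), 3))
```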

AI demonstrates comparable diagnostic performance to radiologists in MRI detection of anterior cruciate ligament tears: a systematic review and meta-analysis.

Gill SS, Haq T, Zhao Y, Ristic M, Amiras D, Gupte CM

PubMed | Sep 25, 2025
Anterior cruciate ligament (ACL) injuries are among the most common knee injuries, affecting 1 in 3500 people annually. With rising rates of ACL tears, particularly in children, timely diagnosis is critical. This study evaluates artificial intelligence (AI) effectiveness in diagnosing and classifying ACL tears on MRI through a systematic review and meta-analysis, comparing AI performance with clinicians and assessing radiomic and non-radiomic models. Major databases were searched for AI models diagnosing ACL tears via MRIs. Thirty-six studies, representing 52 models, were included. Accuracy, sensitivity, and specificity metrics were extracted. Pooled estimates were calculated using a random-effects model. Subgroup analyses compared MRI sequences, ground truths, AI versus clinician performance, and radiomic versus non-radiomic models. This study was conducted in line with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocols. AI demonstrated strong diagnostic performance, with pooled accuracy, sensitivity, and specificity of 87.37%, 90.73%, and 91.34%, respectively. Classification models achieved pooled metrics of 90.46%, 88.68%, and 94.08%. Radiomic models outperformed non-radiomic models, and AI demonstrated comparable performance to clinicians in key metrics. Three-dimensional (3D) proton density fat suppression (PDFS) sequences with < 2 mm slice depth yielded the most promising results, despite small sample sizes, favouring arthroscopic benchmarks. Despite high heterogeneity (I² > 90%), AI models demonstrate diagnostic performance comparable to clinicians and may serve as valuable adjuncts in ACL tear detection, pending prospective validation. However, substantial heterogeneity and limited interpretability remain key challenges. Further research and standardised evaluation frameworks are needed to support clinical integration. Question: Is AI effective and accurate in diagnosing and classifying anterior cruciate ligament (ACL) tears on MRI? Findings: AI demonstrated high accuracy (87.37%), sensitivity (90.73%), and specificity (91.34%) in ACL tear diagnosis, matching or surpassing clinicians. Radiomic models outperformed non-radiomic approaches. Clinical relevance: AI can enhance the accuracy of ACL tear diagnosis, reducing misdiagnoses and supporting clinicians, especially in resource-limited settings. Its integration into clinical workflows may streamline MRI interpretation, reduce diagnostic delays, and improve patient outcomes by optimising management.
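For readers unfamiliar with the pooling step, a minimal DerSimonian-Laird random-effects sketch follows; the per-study estimates and variances are invented for illustration, not taken from the review's 36 studies.

```python
# Sketch: DerSimonian-Laird random-effects pooling with an I^2 estimate.
import numpy as np

est = np.array([0.90, 0.87, 0.93, 0.85])  # per-study sensitivities (invented)
var = np.array([0.0010, 0.0015, 0.0008, 0.0020])

w = 1 / var                                # fixed-effect (inverse-variance) weights
fixed = np.sum(w * est) / np.sum(w)
Q = np.sum(w * (est - fixed) ** 2)         # Cochran's Q (heterogeneity statistic)
df = len(est) - 1
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # between-study variance

w_re = 1 / (var + tau2)                    # random-effects weights
pooled = np.sum(w_re * est) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))
i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0
print(f"pooled={pooled:.3f}  95% CI=({pooled-1.96*se:.3f}, {pooled+1.96*se:.3f})  I2={i2:.0f}%")
```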

Artificial Intelligence-Led Whole Coronary Artery OCT Analysis; Validation and Identification of Drug Efficacy and Higher-Risk Plaques.

Jessney B, Chen X, Gu S, Huang Y, Goddard M, Brown A, Obaid D, Mahmoudi M, Garcia Garcia HM, Hoole SP, Räber L, Prati F, Schönlieb CB, Roberts M, Bennett M

PubMed | Sep 25, 2025
Intracoronary optical coherence tomography (OCT) can identify changes following drug/device treatment and high-risk plaques, but analysis requires expert clinician or core laboratory interpretation, while artifacts and limited sampling markedly impair reproducibility. Assistive technologies such as artificial intelligence-based analysis may therefore aid both detailed OCT interpretation and patient management. We determined if artificial intelligence-based OCT analysis (AutoOCT) can rapidly process, optimize and analyze OCT images, and identify plaque composition changes that predict drug success/failure and high-risk plaques. AutoOCT deep learning artificial intelligence modules were designed to correct segmentation errors from poor-quality or artifact-containing OCT images, identify tissue/plaque composition, classify plaque types, measure multiple parameters including lumen area, lipid and calcium arcs, and fibrous cap thickness, and output segmented images and clinically useful parameters. Model development used 36,212 frames (127 whole pullbacks, 106 patients). Internal validation of tissue and plaque classification and measurements used ex vivo OCT pullbacks from autopsy arteries, while external validation for plaque stabilization and identifying high-risk plaques used core laboratory analysis of IBIS-4 (Integrated Biomarkers and Imaging Study-4) high-intensity statin (83 patients) and CLIMA (Relationship Between Coronary Plaque Morphology of Left Anterior Descending Artery and Long-Term Clinical Outcome Study; 62 patients) studies, respectively. AutoOCT recovered images containing common artifacts with measurements and tissue and plaque classification accuracy of 83% versus histology, equivalent to expert clinician readers. AutoOCT replicated core laboratory plaque composition changes after high-intensity statin, including reduced lesion lipid arc (13.3° versus 12.5°) and increased minimum fibrous cap thickness (18.9 µm versus 24.4 µm). AutoOCT also identified high-risk plaque features leading to patient events, including minimal lumen area <3.5 mm², lipid arc >180°, and fibrous cap thickness <75 µm, similar to the CLIMA core laboratory. AutoOCT-based analysis of whole coronary artery OCT identifies tissue and plaque types and measures features correlating with plaque stabilization and high-risk plaques. Artificial intelligence-based OCT analysis may augment clinician or core laboratory analysis of intracoronary OCT images for trials of drug/device efficacy and identifying high-risk lesions.
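The high-risk thresholds the study reports (minimal lumen area <3.5 mm², lipid arc >180°, fibrous cap thickness <75 µm) lend themselves to a simple rule check; the sketch below encodes them over a hypothetical per-plaque measurement dictionary (AutoOCT's actual outputs and interfaces are not public).

```python
# Sketch: flagging high-risk plaque features using thresholds from the abstract.
def flag_high_risk(plaque: dict) -> list:
    flags = []
    if plaque["min_lumen_area_mm2"] < 3.5:
        flags.append("minimal lumen area < 3.5 mm2")
    if plaque["lipid_arc_deg"] > 180:
        flags.append("lipid arc > 180 deg")
    if plaque["fibrous_cap_um"] < 75:
        flags.append("fibrous cap thickness < 75 um")
    return flags

# Hypothetical measurements for one lesion:
print(flag_high_risk({"min_lumen_area_mm2": 3.1,
                      "lipid_arc_deg": 210,
                      "fibrous_cap_um": 60}))
```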

Acute myeloid leukemia classification using ReLViT and detection with YOLO enhanced by adversarial networks on bone marrow images.

Hameed M, Raja MAZ, Zameer A, Dar HS, Alluhaidan AS, Aziz R

PubMed | Sep 25, 2025
Acute myeloid leukemia (AML) is recognized as a highly aggressive cancer that affects the bone marrow and blood, making it the most lethal type of leukemia. The detection of AML through medical imaging is challenging due to the complex structural and textural variations inherent in bone marrow images. These challenges are further intensified by the overlapping intensity between leukemia and non-leukemia regions, which reduces the effectiveness of traditional predictive models. This study presents a novel artificial intelligence framework that combines residual blocks merging vision transformers and convolutions with advanced object detection techniques to address the complexities of bone marrow images and enhance the accuracy of AML detection. The framework integrates residual learning-based vision transformer (ReLViT) blocks within a bottleneck architecture, harnessing the combined strengths of residual learning and transformer mechanisms to improve feature representation and computational efficiency. Tailored data pre-processing strategies are employed to manage the textural and structural complexities associated with low-quality images and tumor shapes. The framework's performance is further optimized through a strategic weight-sharing technique to minimize computational overhead. Additionally, a generative adversarial network (GAN) is employed to enhance image quality across all AML imaging modalities, and when combined with a You Only Look Once (YOLO) object detector, it accurately localizes tumor formations in bone marrow images. Extensive and comparative evaluations have demonstrated the superiority of the proposed framework over existing deep convolutional neural networks (CNN) and object detection methods. The model achieves an F1-score of 99.15%, precision of 99.02%, and recall of 99.16%, marking a significant advancement in the field of medical imaging.
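The detection stage is built on YOLO; a hedged sketch of the inference workflow with the ultralytics package follows, using a generic pretrained checkpoint and a hypothetical image path, since the paper's GAN-enhanced, AML-specific weights are not public.

```python
# Sketch: YOLO inference over a bone marrow image (generic weights, not the paper's).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                # placeholder checkpoint, not the AML model
results = model("bone_marrow_slide.png")  # hypothetical input path
for r in results:
    for box in r.boxes:
        print(box.xyxy, box.conf, box.cls)  # coordinates, confidence, class id
```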