Page 31 of 45442 results

FDTooth: Intraoral Photographs and CBCT Images for Fenestration and Dehiscence Detection.

Liu K, Elbatel M, Chu G, Shan Z, Sum FHKMH, Hung KF, Zhang C, Li X, Yang Y

pubmed · Jun 14 2025
Fenestration and dehiscence (FD) pose significant challenges in dental treatments as they adversely affect oral health. Although cone-beam computed tomography (CBCT) provides precise diagnostics, its extensive time requirements and radiation exposure limit its routine use for monitoring. Currently, there is no public dataset that combines intraoral photographs and corresponding CBCT images; this limits the development of deep learning algorithms for the automated detection of FD and other potential diseases. In this paper, we present FDTooth, a dataset that includes both intraoral photographs and CBCT images of 241 patients aged between 9 and 55 years. FDTooth contains 1,800 precise bounding boxes annotated on intraoral photographs, with gold-standard ground truth extracted from CBCT. We developed a baseline model for automated FD detection in intraoral photographs. The developed dataset and model can serve as valuable resources for research on interdisciplinary dental diagnostics, offering clinicians a non-invasive, efficient method for early FD screening.

Recent Advances in sMRI and Artificial Intelligence for Presurgical Planning in Focal Cortical Dysplasia: A Systematic Review.

Mahmoudi A, Alizadeh A, Ganji Z, Zare H

pubmed · Jun 13 2025
Focal Cortical Dysplasia (FCD) is a leading cause of drug-resistant epilepsy, particularly in children and young adults, necessitating precise presurgical planning. Traditional structural MRI often fails to detect subtle FCD lesions, especially in MRI-negative cases. Recent advancements in Artificial Intelligence (AI), particularly Machine Learning (ML) and Deep Learning (DL), have the potential to enhance the sensitivity and specificity of FCD detection. This systematic review, following PRISMA guidelines, searched PubMed, Embase, Scopus, Web of Science, and Science Direct for articles published from 2020 onwards, using keywords related to "Focal Cortical Dysplasia," "MRI," and "Artificial Intelligence/Machine Learning/Deep Learning." Included were original studies employing AI and structural MRI (sMRI) for FCD detection in humans, reporting quantitative performance metrics, and published in English. Data extraction was performed independently by two reviewers, with discrepancies resolved by a third. Among 88 full-text articles reviewed, 27 met the inclusion criteria. The included studies demonstrated that AI significantly improved FCD detection, achieving sensitivity up to 97.1% and specificity up to 84.3% across various MRI sequences, including MPRAGE, MP2RAGE, and FLAIR. AI models, particularly deep learning models, matched or surpassed human radiologist performance, with combined AI-human expertise reaching detection rates of up to 87%. The studies emphasized the importance of advanced MRI sequences and multimodal MRI for enhanced detection, though model performance varied with FCD type and training datasets. Recent advances in sMRI and AI, especially deep learning, offer substantial potential to improve FCD detection, leading to better presurgical planning and patient outcomes in drug-resistant epilepsy. These methods enable faster, more accurate, and automated FCD detection, potentially enhancing surgical decision-making.
Further clinical validation and optimization of AI algorithms across diverse datasets are essential for broader clinical translation.
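The review's headline figures (sensitivity up to 97.1%, specificity up to 84.3%) follow from the standard confusion-matrix definitions; as a reminder, a minimal sketch with illustrative counts that are not taken from any reviewed study:

```python
def sensitivity(tp: int, fn: int) -> float:
    """True-positive rate: fraction of actual FCD lesions that are detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True-negative rate: fraction of lesion-free cases correctly cleared."""
    return tn / (tn + fp)

# Illustrative counts for a hypothetical detection study.
tp, fn, tn, fp = 33, 1, 59, 11
print(f"sensitivity = {sensitivity(tp, fn):.1%}")  # 33/34 ≈ 97.1%
print(f"specificity = {specificity(tn, fp):.1%}")  # 59/70 ≈ 84.3%
```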

Conditional diffusion models for guided anomaly detection in brain images using fluid-driven anomaly randomization

Ana Lawry Aguila, Peirong Liu, Oula Puonti, Juan Eugenio Iglesias

arxiv preprint · Jun 11 2025
Supervised machine learning has enabled accurate pathology detection in brain MRI, but it requires training data from diseased subjects that may not be readily available in some scenarios, for example, in the case of rare diseases. Reconstruction-based unsupervised anomaly detection, in particular using diffusion models, has gained popularity in the medical field as it allows for training on healthy images alone, eliminating the need for large disease-specific cohorts. These methods assume that a model trained on normal data cannot accurately represent or reconstruct anomalies. However, this assumption often fails in practice: models may fail to reconstruct healthy tissue, or may reconstruct abnormal regions faithfully, i.e., fail to remove anomalies. In this work, we introduce a novel conditional diffusion model framework for anomaly detection and healthy image reconstruction in brain MRI. Our weakly supervised approach integrates synthetically generated pseudo-pathology images into the modeling process to better guide the reconstruction of healthy images. To generate these pseudo-pathologies, we apply fluid-driven anomaly randomization to augment real pathology segmentation maps from an auxiliary dataset, ensuring that the synthetic anomalies are both realistic and anatomically coherent. We evaluate our model's ability to detect pathology using both synthetic anomaly datasets and real pathology from the ATLAS dataset. In our extensive experiments, our model: (i) consistently outperforms variational autoencoders and conditional and unconditional latent diffusion; and (ii) on most datasets, surpasses the performance of supervised inpainting methods with access to paired diseased/healthy images.
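The residual idea underlying reconstruction-based anomaly detection can be sketched in a few lines. This is a generic illustration over synthetic arrays, not the authors' conditional-diffusion implementation; the threshold value is arbitrary:

```python
import numpy as np

def anomaly_map(image: np.ndarray, reconstruction: np.ndarray) -> np.ndarray:
    """Voxel-wise absolute residual between the input and its 'healthy'
    reconstruction; high values flag candidate anomalies."""
    return np.abs(image - reconstruction)

def detect(image: np.ndarray, reconstruction: np.ndarray, threshold: float) -> np.ndarray:
    """Binary anomaly mask obtained by thresholding the residual map."""
    return anomaly_map(image, reconstruction) > threshold

rng = np.random.default_rng(0)
healthy = rng.normal(0.5, 0.02, size=(64, 64))  # stand-in "healthy" slice
lesion = healthy.copy()
lesion[20:30, 20:30] += 0.4                     # synthetic bright "anomaly"
mask = detect(lesion, healthy, threshold=0.2)
print(mask.sum())  # 100 voxels flagged (the 10x10 synthetic lesion)
```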

Real-World Diagnostic Performance and Clinical Utility of Artificial-Intelligence-Assisted Interpretation for Detection of Lung Metastasis on CT in Patients With Colorectal Cancer.

Jang S, Kim J, Lee JS, Jeong Y, Nam JG, Kim J, Lee KW

pubmed · Jun 11 2025
<b>Background:</b> Studies of artificial intelligence (AI) for lung nodule detection on CT have primarily been conducted in investigational settings and/or focused on lung cancer screening. <b>Objective:</b> To evaluate the impact of AI assistance on radiologists' diagnostic performance for detecting lung metastases on chest CT in patients with colorectal cancer (CRC) in real-world clinical practice and to assess the clinical utility of AI assistance in this setting. <b>Methods:</b> This retrospective study included patients with CRC who underwent chest CT as surveillance for lung metastasis from May 2020 to December 2020 (conventional interpretation) or May 2022 to December 2022 (AI-assisted interpretation). Between periods, the institution implemented a commercial AI lung nodule detection system. During the second period, radiologists interpreted examinations concurrently with AI-generated reports, using clinical judgment regarding whether to report AI-detected nodules. The reference standard for metastasis incorporated pathologic and clinical follow-up criteria. Diagnostic performance (sensitivity, specificity, accuracy) and clinical utility (diagnostic yield, false-referral rate, management changes after positive reports) were compared between groups based on clinical radiology reports. Net benefit was estimated using the decision curve analysis equation. Standalone AI interpretation was also evaluated. <b>Results:</b> The conventional interpretation group included 647 patients (mean age, 64±11 years; 394 men, 253 women; metastasis prevalence, 4.3%); the AI-assisted interpretation group included 663 patients (mean age, 63±12 years; 381 men, 282 women; metastasis prevalence, 4.4%). 
Compared with the conventional interpretation group, the AI-assisted interpretation group showed higher sensitivity (72.4% vs 32.1%; p=.008), accuracy (98.5% vs 96.0%; p=.005), and frequency of management changes (55.2% vs 25.0%; p=.02), without significant differences in specificity (99.7% vs 98.9%; p=.11), diagnostic yield (3.2% vs 1.4%; p=.30), or false-referral rate (0.3% vs 1.1%; p=.10). AI-assisted interpretation had a positive estimated net benefit across outcome ratios. Standalone AI correctly detected metastasis in 24 of 29 patients but had 381 false-positive detections in 634 patients without metastasis; only one AI false positive was reported as positive by interpreting radiologists. <b>Conclusion:</b> AI assistance yielded increased sensitivity, accuracy, and frequency of management changes, without a significant change in specificity. False-positive AI results minimally impacted radiologists' interpretations. <b>Clinical Impact:</b> The findings support the clinical utility of AI assistance for CRC lung metastasis surveillance.
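The net benefit referenced in the Methods follows the standard decision-curve formula, NB = TP/n - (FP/n) * p_t/(1 - p_t), where p_t is the threshold probability. A minimal sketch with hypothetical counts (not the study's data):

```python
def net_benefit(tp: int, fp: int, n: int, p_t: float) -> float:
    """Decision-curve net benefit at threshold probability p_t: true
    positives credited per patient, false positives penalized by the
    odds of the threshold, p_t / (1 - p_t)."""
    return tp / n - (fp / n) * (p_t / (1.0 - p_t))

# Illustrative counts: 21 true positives and 2 false positives among
# 663 patients, evaluated at a 10% threshold probability.
nb = net_benefit(tp=21, fp=2, n=663, p_t=0.10)
print(round(nb, 4))
```

At low thresholds the false-positive penalty is small, which is why a strategy with few false referrals can retain positive net benefit across a range of outcome ratios.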

PatchGuard: Adversarially Robust Anomaly Detection and Localization through Vision Transformers and Pseudo Anomalies

Mojtaba Nafez, Amirhossein Koochakian, Arad Maleki, Jafar Habibi, Mohammad Hossein Rohban

arxiv preprint · Jun 10 2025
Anomaly Detection (AD) and Anomaly Localization (AL) are crucial in fields that demand high reliability, such as medical imaging and industrial monitoring. However, current AD and AL approaches are often susceptible to adversarial attacks due to limitations in training data, which typically include only normal, unlabeled samples. This study introduces PatchGuard, an adversarially robust AD and AL method that incorporates pseudo anomalies with localization masks within a Vision Transformer (ViT)-based architecture to address these vulnerabilities. We begin by examining the essential properties of pseudo anomalies and then provide theoretical insights into the attention mechanisms required to enhance the adversarial robustness of AD and AL systems. We then present our approach, which leverages Foreground-Aware Pseudo-Anomalies to overcome the deficiencies of previous anomaly-aware methods. Our method incorporates these crafted pseudo-anomaly samples into a ViT-based framework, with adversarial training guided by a novel loss function designed to improve model robustness, as supported by our theoretical analysis. Experimental results on well-established industrial and medical datasets demonstrate that PatchGuard significantly outperforms previous methods in adversarial settings, achieving performance gains of $53.2\%$ in AD and $68.5\%$ in AL, while also maintaining competitive accuracy in non-adversarial settings. The code repository is available at https://github.com/rohban-lab/PatchGuard .

Automated Whole-Brain Focal Cortical Dysplasia Detection Using MR Fingerprinting With Deep Learning.

Ding Z, Morris S, Hu S, Su TY, Choi JY, Blümcke I, Wang X, Sakaie K, Murakami H, Alexopoulos AV, Jones SE, Najm IM, Ma D, Wang ZI

pubmed · Jun 10 2025
Focal cortical dysplasia (FCD) is a common pathology for pharmacoresistant focal epilepsy, yet detection of FCD on clinical MRI is challenging. Magnetic resonance fingerprinting (MRF) is a novel quantitative imaging technique providing fast and reliable tissue property measurements. The aim of this study was to develop an MRF-based deep-learning (DL) framework for whole-brain FCD detection. We included patients with pharmacoresistant focal epilepsy and pathologically/radiologically diagnosed FCD, as well as age-matched and sex-matched healthy controls (HCs). All participants underwent 3D whole-brain MRF and clinical MRI scans. T1, T2, gray matter (GM), and white matter (WM) tissue fraction maps were reconstructed from a dictionary-matching algorithm based on the MRF acquisition. A 3D ROI was manually created for each lesion. All MRF maps and lesion labels were registered to the Montreal Neurological Institute space. Mean and SD T1 and T2 maps were calculated voxel-wise across the HC data. T1 and T2 <i>z</i>-score maps for each patient were generated by subtracting the mean HC map and dividing by the SD HC map. MRF-based morphometric maps were produced in the same manner as in the morphometric analysis program (MAP), based on MRF GM and WM maps. A no-new U-Net (nnU-Net) model was trained using various input combinations from clinical MRI and MRF to assess the impact of different input types on model performance, evaluated through leave-one-patient-out cross-validation. We included 40 patients with FCD (mean age 28.1 years, 47.5% female; 11 with FCD IIa, 14 with IIb, 12 with mMCD, 3 with MOGHE) and 67 HCs. The DL model with optimal performance used all MRF-based inputs, including MRF-synthesized T1w, T1z, and T2z maps; tissue fraction maps; and morphometric maps. The patient-level sensitivity was 80% with an average of 1.7 false positives (FPs) per patient.
Sensitivity was consistent across subtypes, lobar locations, and lesional/nonlesional clinical MRI. Models using clinical images showed lower sensitivity and higher FPs. The MRF-DL model also outperformed the established MAP18 pipeline in sensitivity, FPs, and lesion label overlap. The MRF-DL framework demonstrated efficacy for whole-brain FCD detection. Multiparametric MRF features from a single scan offer promising inputs for developing a deep-learning tool capable of detecting subtle epileptic lesions.
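The z-score maps described above are a simple voxel-wise normalization of each patient's quantitative map against the healthy-control cohort. A minimal sketch with synthetic data; the array shapes and T1 values are illustrative only:

```python
import numpy as np

def zscore_maps(patient_map: np.ndarray, hc_maps: np.ndarray) -> np.ndarray:
    """Voxel-wise z-score of a patient map against a healthy-control
    cohort: (patient - mean_HC) / SD_HC, computed per voxel."""
    mu = hc_maps.mean(axis=0)   # voxel-wise mean across controls
    sd = hc_maps.std(axis=0)    # voxel-wise SD across controls
    return (patient_map - mu) / sd

rng = np.random.default_rng(1)
# 67 healthy controls, tiny 8x8 "maps" of T1-like values (ms).
hc = rng.normal(1200.0, 50.0, size=(67, 8, 8))
# Construct a patient whose every voxel sits 3 SDs above the HC mean.
patient = hc.mean(axis=0) + 3.0 * hc.std(axis=0)
z = zscore_maps(patient, hc)
print(np.allclose(z, 3.0))  # True
```

High-|z| voxels are the candidate lesion voxels fed (alongside the other maps) into the detection network.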

Transformer-based robotic ultrasound 3D tracking for capsule robot in GI tract.

Liu X, He C, Wu M, Ping A, Zavodni A, Matsuura N, Diller E

pubmed · Jun 9 2025
Ultrasound (US) imaging is a promising modality for real-time monitoring of robotic capsule endoscopes navigating through the gastrointestinal (GI) tract. It offers high temporal resolution and safety but is limited by a narrow field of view, low visibility in gas-filled regions, and challenges in detecting out-of-plane motion. This work addresses these issues by proposing a novel robotic ultrasound tracking system capable of long-distance 3D tracking and active re-localization when the capsule is lost due to motion or artifacts. We develop a hybrid deep learning-based tracking framework combining convolutional neural networks (CNNs) and a transformer backbone. The CNN component efficiently encodes spatial features, while the transformer captures long-range contextual dependencies in B-mode US images. This model is integrated with a robotic arm that adaptively scans and tracks the capsule. The system's performance was evaluated using ex vivo colon phantoms under varying imaging conditions, with physical perturbations introduced to simulate realistic clinical scenarios. The proposed system achieved continuous 3D tracking over distances exceeding 90 cm, with a mean centroid localization error of 1.5 mm and over 90% detection accuracy. We also demonstrated 3D tracking in a more complex workspace featuring two curved sections that simulate anatomical challenges, suggesting strong resilience of the tracking system to motion-induced artifacts and geometric variability. The system maintained real-time tracking at 9-12 FPS and successfully re-localized the capsule within seconds after tracking loss, even under gas artifacts and acoustic shadowing. This study presents a hybrid CNN-transformer system for automatic, real-time 3D ultrasound tracking of capsule robots over long distances. The method reliably handles occlusions, view loss, and image artifacts, offering millimeter-level tracking accuracy.
It significantly reduces clinical workload through autonomous detection and re-localization. Future work includes improving probe-tissue interaction handling and validating performance in live animal and human trials to assess physiological impacts.
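The reported 1.5 mm mean centroid localization error is a straightforward Euclidean metric between predicted and ground-truth capsule positions. A minimal sketch with hypothetical tracked positions, not the study's measurements:

```python
import math

def centroid_error(pred: tuple, true: tuple) -> float:
    """Euclidean distance (mm) between a predicted and a ground-truth
    3D capsule centroid."""
    return math.dist(pred, true)

# Hypothetical predicted and ground-truth positions along a phantom (mm).
pred_track = [(10.2, 5.1, 3.0), (20.0, 5.0, 3.2)]
true_track = [(10.0, 5.0, 3.0), (20.0, 5.5, 3.0)]
errors = [centroid_error(p, t) for p, t in zip(pred_track, true_track)]
mean_err = sum(errors) / len(errors)
print(round(mean_err, 2))
```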

Automated Vessel Occlusion Software in Acute Ischemic Stroke: Pearls and Pitfalls.

Aziz YN, Sriwastwa A, Nael K, Harker P, Mistry EA, Khatri P, Chatterjee AR, Heit JJ, Jadhav A, Yedavalli V, Vagal AS

pubmed · Jun 9 2025
Software programs leveraging artificial intelligence to detect vessel occlusions are now widely available to aid in stroke triage. Given their proprietary nature, there is a surprising lack of information regarding how these programs work, who is using them, and how they perform in an unbiased real-world setting. In this educational review of automated vessel occlusion software, we discuss emerging evidence of their utility, underlying algorithms, real-world diagnostic performance, and limitations. The intended audience includes specialists in stroke care in neurology, emergency medicine, radiology, and neurosurgery. Practical tips for onboarding and utilization of this technology are provided based on the multidisciplinary experience of the authorship team.

Addressing Limited Generalizability in Artificial Intelligence-Based Brain Aneurysm Detection for Computed Tomography Angiography: Development of an Externally Validated Artificial Intelligence Screening Platform.

Pettersson SD, Filo J, Liaw P, Skrzypkowska P, Klepinowski T, Szmuda T, Fodor TB, Ramirez-Velandia F, Zieliński P, Chang YM, Taussky P, Ogilvy CS

pubmed · Jun 9 2025
Brain aneurysm detection models, both in the literature and in industry, continue to lack generalizability during external validation, limiting clinical adoption. This challenge is largely due to extensive exclusion criteria during training data selection. The authors developed the first model to achieve generalizability using novel methodological approaches. Computed tomography angiography (CTA) scans from 2004 to 2023 at the study institution were used for model training, including untreated unruptured intracranial aneurysms without extensive cerebrovascular disease. External validation used digital subtraction angiography-verified CTAs from an international center, while prospective validation occurred at the internal institution over 9 months. A public web platform was created for further model validation. A total of 2194 CTA scans were used for this study. One thousand five hundred eighty-seven patients and 1920 aneurysms with a mean size of 5.3 ± 3.7 mm were included in the training cohort. The mean age of the patients was 69.7 ± 14.9 years, and 1203 (75.8%) were female. The model achieved a training Dice score of 0.88 and a validation Dice score of 0.76. Prospective internal validation on 304 scans yielded a lesion-level (LL) sensitivity of 82.5% (95% CI: 75.5-87.9) and specificity of 89.6% (95% CI: 84.5-93.2). External validation on 303 scans demonstrated comparable LL sensitivity and specificity of 83.5% (95% CI: 75.1-89.4) and 92.9% (95% CI: 88.8-95.6), respectively. Radiologist LL sensitivity from the external center was 84.5% (95% CI: 76.2-90.2), and 87.5% of the missed aneurysms were detected by the model. The authors developed the first publicly testable artificial intelligence model for aneurysm detection on CTA scans, demonstrating generalizability and state-of-the-art performance in external validation. The model addresses key limitations of previous efforts and enables broader validation through a web-based platform.
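The training and validation Dice scores cited above measure voxel overlap between predicted and ground-truth segmentations. A minimal sketch over toy voxel-index sets (the indices are illustrative, not imaging data):

```python
def dice(pred: set, truth: set) -> float:
    """Dice similarity coefficient between predicted and ground-truth
    voxel sets: 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap)
    to 1 (perfect overlap)."""
    if not pred and not truth:
        return 1.0  # convention: two empty masks agree perfectly
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy voxel index sets for a hypothetical aneurysm segmentation.
predicted = {1, 2, 3, 4, 5, 6, 7, 8}
ground_truth = {3, 4, 5, 6, 7, 8, 9, 10}
print(dice(predicted, ground_truth))  # 0.75
```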

Integration of artificial intelligence into cardiac ultrasonography practice.

Shaulian SY, Gala D, Makaryus AN

pubmed · Jun 9 2025
Over the last several decades, echocardiography has made numerous technological advancements, with one of the most significant being the integration of artificial intelligence (AI). AI algorithms assist novice operators to acquire diagnostic-quality images and automate complex analyses. This review explores the integration of AI into various echocardiographic modalities, including transthoracic, transesophageal, intracardiac, and point-of-care ultrasound. It examines how AI enhances image acquisition, streamlines analysis, and improves diagnostic performance across routine, critical care, and complex cardiac imaging. To conduct this review, PubMed was searched using targeted keywords aligned with each section of the paper, focusing primarily on peer-reviewed articles published from 2020 onward. Earlier studies were included when foundational or frequently cited. The findings were organized thematically to highlight clinical relevance and practical applications. Challenges persist in clinical application, including algorithmic bias, ethical concerns, and the need for clinician training and AI oversight. Despite these, AI's potential to revolutionize cardiovascular care through precision and accessibility remains unparalleled, with benefits likely to far outweigh obstacles if appropriately applied and implemented in cardiac ultrasonography.