A European Multi-Center Breast Cancer MRI Dataset

Gustav Müller-Franzes, Lorena Escudero Sánchez, Nicholas Payne, Alexandra Athanasiou, Michael Kalogeropoulos, Aitor Lopez, Alfredo Miguel Soro Busto, Julia Camps Herrero, Nika Rasoolzadeh, Tianyu Zhang, Ritse Mann, Debora Jutz, Maike Bode, Christiane Kuhl, Wouter Veldhuis, Oliver Lester Saldanha, JieFu Zhu, Jakob Nikolas Kather, Daniel Truhn, Fiona J. Gilbert

arXiv preprint · May 31, 2025
Detecting breast cancer early is of the utmost importance to effectively treat the millions of women afflicted by breast cancer worldwide every year. Although mammography is the primary imaging modality for screening breast cancer, there is an increasing interest in adding magnetic resonance imaging (MRI) to screening programmes, particularly for women at high risk. Recent guidelines by the European Society of Breast Imaging (EUSOBI) recommended breast MRI as a supplemental screening tool for women with dense breast tissue. However, acquiring and reading MRI scans requires significantly more time from expert radiologists. This highlights the need to develop new automated methods to detect cancer accurately using MRI and Artificial Intelligence (AI), which have the potential to support radiologists in breast MRI interpretation and classification and help detect cancer earlier. For this reason, the ODELIA consortium has made this multi-centre dataset publicly available to assist in developing AI tools for the detection of breast cancer on MRI.

Diagnostic Accuracy of an Artificial Intelligence-based Platform in Detecting Periapical Radiolucencies on Cone-Beam Computed Tomography Scans of Molars.

Allihaibi M, Koller G, Mannocci F

PubMed · May 31, 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-based platform (Diagnocat) in detecting periapical radiolucencies (PARLs) in cone-beam computed tomography (CBCT) scans of molars. Specifically, we assessed Diagnocat's performance in detecting PARLs in non-root-filled molars and compared its diagnostic performance between preoperative and postoperative scans. This retrospective study analyzed preoperative and postoperative CBCT scans of 134 molars (327 roots). PARLs detected by Diagnocat were compared with assessments independently performed by two experienced endodontists, serving as the reference standard. Diagnostic performance was assessed at both tooth and root levels using sensitivity, specificity, accuracy, positive predictive value (PPV), negative predictive value (NPV), F1 score, and the area under the receiver operating characteristic curve (AUC-ROC). In preoperative scans of non-root-filled molars, Diagnocat demonstrated high sensitivity (teeth: 93.9%, roots: 86.2%), moderate specificity (teeth: 65.2%, roots: 79.9%), accuracy (teeth: 79.1%, roots: 82.6%), PPV (teeth: 71.8%, roots: 75.8%), NPV (teeth: 91.8%, roots: 88.8%), and F1 score (teeth: 81.3%, roots: 80.7%) for PARL detection. The AUC was 0.76 at the tooth level and 0.79 at the root level. Postoperative scans showed significantly lower PPV (teeth: 54.2%; roots: 46.9%) and F1 scores (teeth: 67.2%; roots: 59.2%). Diagnocat shows promise in detecting PARLs in CBCT scans of non-root-filled molars, demonstrating high sensitivity but moderate specificity, highlighting the need for human oversight to prevent overdiagnosis. However, diagnostic performance declined significantly in postoperative scans of root-filled molars. Further research is needed to optimize the platform's performance and support its integration into clinical practice. AI-based platforms such as Diagnocat can assist clinicians in detecting PARLs in CBCT scans, enhancing diagnostic efficiency and supporting decision-making. However, human expertise remains essential to minimize the risk of overdiagnosis and avoid unnecessary treatment.
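
The metrics reported above all follow from a single binary confusion matrix at the tooth or root level. A minimal sketch of those definitions is given below; the counts used are illustrative placeholders, not values from the study.

```python
# Minimal sketch: diagnostic metrics from a binary confusion matrix.
# The counts below are illustrative placeholders, not values from the study.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute the metrics reported in the abstract from raw counts."""
    sensitivity = tp / (tp + fn)                 # true positive rate (recall)
    specificity = tn / (tn + fp)                 # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    ppv = tp / (tp + fp)                         # positive predictive value (precision)
    npv = tn / (tn + fn)                         # negative predictive value
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "accuracy": accuracy, "PPV": ppv, "NPV": npv, "F1": f1}

if __name__ == "__main__":
    # Example: 100 teeth, 46 of which have a PARL per the reference standard.
    print(diagnostic_metrics(tp=43, fp=19, tn=35, fn=3))
```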

The value of artificial intelligence in PSMA PET: a pathway to improved efficiency and results.

Dadgar H, Hong X, Karimzadeh R, Ibragimov B, Majidpour J, Arabi H, Al-Ibraheem A, Khalaf AN, Anwar FM, Marafi F, Haidar M, Jafari E, Zarei A, Assadi M

PubMed · May 30, 2025
This systematic review investigates the potential of artificial intelligence (AI) in improving the accuracy and efficiency of prostate-specific membrane antigen positron emission tomography (PSMA PET) scans for detecting metastatic prostate cancer. A comprehensive literature search was conducted across Medline, Embase, and Web of Science, adhering to PRISMA guidelines. Key search terms included "artificial intelligence," "machine learning," "deep learning," "prostate cancer," and "PSMA PET." The PICO framework guided the selection of studies focusing on AI's application in evaluating PSMA PET scans for staging lymph node and distant metastasis in prostate cancer patients. Inclusion criteria prioritized original English-language articles published up to October 2024, excluding studies using non-PSMA radiotracers, those analyzing only the CT component of PSMA PET-CT, studies focusing solely on intra-prostatic lesions, and non-original research articles. The review included 22 studies, with a mix of prospective and retrospective designs. AI algorithms employed included machine learning (ML), deep learning (DL), and convolutional neural networks (CNNs). The studies explored various applications of AI, including improving diagnostic accuracy, sensitivity, differentiation from benign lesions, standardization of reporting, and predicting treatment response. Results showed high sensitivity (62% to 97%) and accuracy (AUC up to 98%) in detecting metastatic disease, but also significant variability in positive predictive value (39.2% to 66.8%). AI demonstrates significant promise in enhancing PSMA PET scan analysis for metastatic prostate cancer, offering improved efficiency and potentially better diagnostic accuracy. However, the variability in performance and the "black box" nature of some algorithms highlight the need for larger prospective studies, improved model interpretability, and the continued involvement of experienced nuclear medicine physicians in interpreting AI-assisted results. AI should be considered a valuable adjunct, not a replacement, for expert clinical judgment.

Assessing the value of artificial intelligence-based image analysis for pre-operative surgical planning of neck dissections and iENE detection in head and neck cancer patients.

Schmidl B, Hoch CC, Walter R, Wirth M, Wollenberg B, Hussain T

PubMed · May 30, 2025
Accurate preoperative detection and analysis of lymph node metastasis (LNM) in head and neck squamous cell carcinoma (HNSCC) is essential for the surgical planning and execution of a neck dissection and may directly affect the morbidity and prognosis of patients. Additionally, predicting extranodal extension (ENE) using pre-operative imaging could be particularly valuable in oropharyngeal HPV-positive squamous cell carcinoma, enabling more accurate patient counseling and allowing the decision to favor primary chemoradiotherapy over immediate neck dissection when appropriate. Currently, radiological images are evaluated by radiologists and head and neck oncologists, and automated image interpretation is not part of the current standard of care. Therefore, the value of preoperative image recognition by artificial intelligence (AI) with the large language model (LLM) ChatGPT-4V was evaluated in this exploratory study based on neck computed tomography (CT) images of HNSCC patients with cervical LNM and corresponding images without LNM. The objective of this study was, firstly, to assess the preoperative rater accuracy by comparing clinician assessments of imaging-detected extranodal extension (iENE) and the extent of neck dissection to AI predictions, and, secondly, to evaluate the pathology-based accuracy by comparing AI predictions to final histopathological outcomes. A total of 45 preoperative CT scans were retrospectively analyzed: 15 cases in which a selective neck dissection (sND) was performed, 15 cases with ensuing modified radical neck dissection (mrND), and 15 cases without LNM (sND). Of note, image analysis was based on three single images provided to both ChatGPT-4V and the head and neck surgeons as reviewers. Final pathological characteristics were available in all cases, as all HNSCC patients had undergone surgery. ChatGPT-4V was tasked with providing the extent of LNM in the preoperative CT scans, a recommendation for the extent of neck dissection, and the detection of iENE. The diagnostic performance of ChatGPT-4V was reviewed independently by two head and neck surgeons, and its accuracy, sensitivity, and specificity were assessed. In this study, ChatGPT-4V reached a sensitivity of 100% and a specificity of 34.09% in identifying the need for a radical neck dissection based on neck CT images. The sensitivity and specificity of detecting iENE were 100% and 34.15%, respectively. Both human reviewers achieved higher specificity. Notably, ChatGPT-4V also recommended an mrND and detected iENE on CT images without any cervical LNM. In this exploratory study of 45 preoperative neck CT scans obtained before neck dissection, ChatGPT-4V substantially overestimated the degree and severity of lymph node metastasis in head and neck cancer. While these results suggest that ChatGPT-4V may not yet provide added value for surgical planning in head and neck cancer, the unparalleled speed of analysis and the well-founded reasoning it provides suggest that AI tools may add value in the future.
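
For readers unfamiliar with how a vision-capable LLM can be queried with CT slices, the sketch below shows one way to submit three axial images with a structured prompt via the OpenAI Python SDK. The model name, prompt wording, and file paths are illustrative assumptions; the study's exact prompting protocol is not reproduced here.

```python
# Hedged sketch: querying a vision-capable LLM with axial neck-CT slices.
# Model name, prompt wording, and file paths are illustrative assumptions.
import base64
from openai import OpenAI

def encode_image(path: str) -> str:
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

slices = ["slice_1.png", "slice_2.png", "slice_3.png"]  # three images per case, as in the study
content = [{"type": "text",
            "text": ("Based on these neck CT slices, state the extent of cervical lymph node "
                     "metastasis, whether imaging-detected extranodal extension (iENE) is present, "
                     "and whether a selective or (modified) radical neck dissection is indicated.")}]
content += [{"type": "image_url",
             "image_url": {"url": f"data:image/png;base64,{encode_image(p)}"}}
            for p in slices]

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for a GPT-4V-class vision model
    messages=[{"role": "user", "content": content}],
)
print(response.choices[0].message.content)
```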

HVAngleEst: A Dataset for End-to-end Automated Hallux Valgus Angle Measurement from X-Ray Images.

Wang Q, Ji D, Wang J, Liu L, Yang X, Zhang Y, Liang J, Liu P, Zhao H

PubMed · May 30, 2025
Accurate measurement of the hallux valgus angle (HVA) and the intermetatarsal angle (IMA) is essential for diagnosing hallux valgus and determining appropriate treatment strategies. Traditional manual measurement methods, while standardized, are time-consuming, labor-intensive, and subject to evaluator bias. Deep learning has recently been applied to hallux valgus angle estimation, but developing effective algorithms requires large, well-annotated datasets. Existing X-ray datasets are typically limited to images of cropped foot regions, and only one dataset, containing very few samples, is publicly available. To address these challenges, we introduce HVAngleEst, the first large-scale, open-access dataset specifically designed for hallux valgus angle estimation. HVAngleEst comprises 1,382 X-ray images from 1,150 patients and includes comprehensive annotations, such as foot localization, hallux valgus angles, and line segments for each phalanx. This dataset enables fully automated, end-to-end hallux valgus angle estimation, reducing manual labor and eliminating evaluator bias.
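
Each annotated line segment defines a bone axis, so HVA and IMA reduce to the angle between two segments (first metatarsal vs. proximal phalanx of the hallux, and first vs. second metatarsal, respectively). A minimal sketch with illustrative coordinates, not taken from the HVAngleEst annotations:

```python
# Minimal sketch: angle between two annotated line segments, as used for HVA/IMA.
# Coordinates below are illustrative, not values from the dataset.
import numpy as np

def segment_angle_deg(seg_a: np.ndarray, seg_b: np.ndarray) -> float:
    """Angle in degrees between two segments given as 2x2 arrays [[x1, y1], [x2, y2]]."""
    va = seg_a[1] - seg_a[0]
    vb = seg_b[1] - seg_b[0]
    cos = np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb))
    # abs() ignores segment direction, returning the acute angle between the bone axes.
    return float(np.degrees(np.arccos(np.clip(abs(cos), 0.0, 1.0))))

first_metatarsal = np.array([[120.0, 400.0], [150.0, 220.0]])
proximal_phalanx = np.array([[150.0, 220.0], [195.0, 130.0]])
print(f"HVA ~ {segment_angle_deg(first_metatarsal, proximal_phalanx):.1f} deg")
```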

Federated Foundation Model for GI Endoscopy Images

Alina Devkota, Annahita Amireskandari, Joel Palko, Shyam Thakkar, Donald Adjeroh, Xiajun Jiang, Binod Bhattarai, Prashnna K. Gyawali

arXiv preprint · May 30, 2025
Gastrointestinal (GI) endoscopy is essential in identifying GI tract abnormalities in order to detect diseases in their early stages and improve patient outcomes. Although deep learning has shown success in supporting GI diagnostics and decision-making, these models require curated datasets with labels that are expensive to acquire. Foundation models offer a promising solution by learning general-purpose representations, which can be fine-tuned for specific tasks, overcoming data scarcity. Developing foundation models for medical imaging holds significant potential, but the sensitive and protected nature of medical data presents unique challenges. Foundation model training typically requires extensive datasets, and while hospitals generate large volumes of data, privacy restrictions prevent direct data sharing, making foundation model training infeasible in most scenarios. In this work, we propose a federated learning (FL) framework for training foundation models for gastroendoscopy imaging, enabling data to remain within local hospital environments while contributing to a shared model. We explore several established FL algorithms, assess their suitability for training foundation models without relying on task-specific labels, and conduct experiments in both homogeneous and heterogeneous settings. We evaluate the trained foundation model on three critical downstream tasks (classification, detection, and segmentation) and demonstrate that it achieves improved performance across all tasks, highlighting the effectiveness of our approach in a federated, privacy-preserving setting.
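
A minimal sketch of the aggregation step implied by "data remain within local hospital environments": in federated averaging (FedAvg), one of the established FL algorithms, each hospital trains locally and only the model weights, weighted by local sample counts, are merged on the server. Whether the paper uses exactly this variant is an assumption.

```python
# Minimal sketch of federated averaging (FedAvg): clients (hospitals) train locally
# and only model parameters are aggregated; raw images never leave the clients.
import copy
import torch

def fedavg(client_state_dicts, client_sizes):
    """Weighted average of client model parameters (buffers are cast to float for simplicity)."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        avg[key] = sum(sd[key].float() * (n / total)
                       for sd, n in zip(client_state_dicts, client_sizes))
    return avg

# Usage: after each round, the server merges the locally trained foundation-model weights, e.g.
# global_model.load_state_dict(fedavg([c1.state_dict(), c2.state_dict()], [1200, 800]))
```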

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shett P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed · May 30, 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
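
A hedged sketch of fine-tuning and running a YOLO11 variant with the Ultralytics API, the tooling commonly associated with this architecture; the dataset YAML, weights file, and hyperparameters below are illustrative assumptions rather than the study's actual configuration.

```python
# Hedged sketch: fine-tuning and running a YOLO11 detector with the Ultralytics API.
# Dataset paths, weights file, and hyperparameters are illustrative assumptions.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")  # the "s" variant balanced accuracy and speed in the study

# Fine-tune on 2D ioUS frames annotated with tumor bounding boxes (YOLO-format dataset YAML).
model.train(data="ious_tumors.yaml", epochs=100, imgsz=640)

# Real-time inference on a new intraoperative frame; results include boxes and confidences.
results = model.predict("ious_frame.png", conf=0.25)
for box in results[0].boxes:
    print(box.xyxy, box.conf)
```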

Deep Learning-Based Breast Cancer Detection in Mammography: A Multi-Center Validation Study in Thai Population

Isarun Chamveha, Supphanut Chaiyungyuen, Sasinun Worakriangkrai, Nattawadee Prasawang, Warasinee Chaisangmongkon, Pornpim Korpraphong, Voraparee Suvannarerg, Shanigarn Thiravit, Chalermdej Kannawat, Kewalin Rungsinaporn, Suwara Issaragrisil, Payia Chadbunchachai, Pattiya Gatechumpol, Chawiporn Muktabhant, Patarachai Sereerat

arXiv preprint · May 29, 2025
This study presents a deep learning system for breast cancer detection in mammography, developed using a modified EfficientNetV2 architecture with enhanced attention mechanisms. The model was trained on mammograms from a major Thai medical center and validated on three distinct datasets: an in-domain test set (9,421 cases), a biopsy-confirmed set (883 cases), and an out-of-domain generalizability set (761 cases) collected from two different hospitals. For cancer detection, the model achieved AUROCs of 0.89, 0.96, and 0.94 on the respective datasets. The system's lesion localization capability, evaluated using metrics including Lesion Localization Fraction (LLF) and Non-Lesion Localization Fraction (NLF), demonstrated robust performance in identifying suspicious regions. Clinical validation through concordance tests showed strong agreement with radiologists: 83.5% classification and 84.0% localization concordance for biopsy-confirmed cases, and 78.1% classification and 79.6% localization concordance for out-of-domain cases. Expert radiologists' acceptance rate averaged 96.7% for biopsy-confirmed cases and 89.3% for out-of-domain cases. The system achieved a System Usability Scale score of 74.17 at the source hospital and 69.20 at the validation hospitals, indicating good clinical acceptance. These results demonstrate the model's effectiveness in assisting mammogram interpretation, with the potential to enhance breast cancer screening workflows in clinical practice.
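
As a rough illustration of the described architecture, the sketch below attaches a simple channel-attention gate and a binary scoring head to an EfficientNetV2 backbone from timm. The paper's actual attention modifications are not specified here, so the gating block, backbone variant, and input size are assumptions.

```python
# Hedged sketch: EfficientNetV2 backbone with an illustrative channel-attention head
# for per-mammogram cancer scoring. Backbone variant, gating block, and input size
# are assumptions; the study's actual modifications are not reproduced here.
import timm
import torch
import torch.nn as nn

class MammoClassifier(nn.Module):
    def __init__(self, backbone: str = "tf_efficientnetv2_s"):
        super().__init__()
        self.backbone = timm.create_model(backbone, pretrained=True, num_classes=0)
        dim = self.backbone.num_features
        # Squeeze-and-excitation-style gating over pooled features (illustrative).
        self.attn = nn.Sequential(nn.Linear(dim, dim // 8), nn.ReLU(),
                                  nn.Linear(dim // 8, dim), nn.Sigmoid())
        self.head = nn.Linear(dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)                                  # (B, num_features) pooled features
        return self.head(feats * self.attn(feats)).squeeze(-1)    # one cancer logit per mammogram

model = MammoClassifier()
logits = model(torch.randn(2, 3, 512, 512))  # AUROC is then computed over per-case scores
```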

Deep learning reconstruction enhances tophus detection in a dual-energy CT phantom study.

Schmolke SA, Diekhoff T, Mews J, Khayata K, Kotlyarov M

PubMed · May 28, 2025
This study aimed to compare two deep learning reconstruction (DLR) techniques (AiCE mild and AiCE strong) with two established methods, iterative reconstruction (IR) and filtered back projection (FBP), for the detection of monosodium urate (MSU) in dual-energy computed tomography (DECT). An ex vivo bio-phantom and a raster phantom were prepared by inserting syringes containing different MSU concentrations and scanned in a 320-row volume DECT scanner at different tube currents. The scans were reconstructed in a soft-tissue kernel using the four reconstruction techniques mentioned above, followed by quantitative assessment of MSU volumes and image quality parameters, i.e., signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Both DLR techniques outperformed conventional IR and FBP in terms of volume detection and image quality. Notably, unlike IR and FBP, the two DLR methods showed no positive correlation of the MSU detection rate with the CT dose index (CTDIvol) in the bio-phantom. Our study highlights the potential of DLR for DECT imaging in gout, where it offers enhanced detection sensitivity, improved image contrast, reduced image noise, and lower radiation exposure. Further research is needed to assess the clinical reliability of this approach.
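
SNR and CNR are typically computed from region-of-interest statistics; a minimal sketch using common definitions follows. The exact ROI placement and formulas used in the study are assumptions, and the HU values are illustrative.

```python
# Minimal sketch: SNR and CNR from region-of-interest statistics, using common definitions.
# ROI placement, formulas, and HU values are illustrative assumptions.
import numpy as np

def snr_cnr(msu_roi: np.ndarray, background_roi: np.ndarray) -> tuple[float, float]:
    """SNR = mean(MSU)/SD(background); CNR = (mean(MSU) - mean(background))/SD(background)."""
    noise = background_roi.std()
    snr = msu_roi.mean() / noise
    cnr = (msu_roi.mean() - background_roi.mean()) / noise
    return float(snr), float(cnr)

# Illustrative HU samples for an MSU-containing ROI and an adjacent soft-tissue background.
rng = np.random.default_rng(0)
msu = rng.normal(180.0, 15.0, size=500)        # syringe with MSU suspension
background = rng.normal(40.0, 12.0, size=500)  # soft-tissue-equivalent material
print(snr_cnr(msu, background))
```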