Page 309 of 332 · 3316 results

A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing.

Yinusa A, Faezipour M

pubmed papers · May 14, 2025
Deep learning, particularly convolutional neural networks (CNNs), has proven valuable for brain tumor classification, aiding diagnostic and therapeutic decisions in medical imaging. Despite their accuracy, these models are vulnerable to adversarial attacks, compromising their reliability in clinical settings. In this research, we utilized a VGG16-based CNN model to classify brain tumors, achieving 96% accuracy on clean magnetic resonance imaging (MRI) data. To assess robustness, we exposed the model to Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD) attacks, which reduced accuracy to 32% and 13%, respectively. We then applied a multi-layered defense strategy, including adversarial training with FGSM and PGD examples and feature squeezing techniques such as bit-depth reduction and Gaussian blurring. This approach improved model resilience, achieving 54% accuracy on FGSM and 47% on PGD adversarial examples. Our results highlight the importance of proactive defense strategies for maintaining the reliability of AI in medical imaging under adversarial conditions.
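The attack and one of the named defenses (bit-depth reduction, a feature-squeezing technique) have simple closed forms. The sketch below is a toy illustration on a 1-D list of pixel intensities with made-up gradient values, not the authors' implementation:

```python
def fgsm_perturb(pixels, grads, eps):
    """Fast Gradient Sign Method: step each pixel by eps in the sign of its
    loss gradient, then clip back to the valid [0, 1] intensity range."""
    sign = lambda g: (g > 0) - (g < 0)
    return [min(1.0, max(0.0, p + eps * sign(g))) for p, g in zip(pixels, grads)]

def bit_depth_squeeze(pixels, bits):
    """Feature squeezing via bit-depth reduction: quantise intensities to
    2**bits levels, which erases small adversarial perturbations."""
    levels = 2 ** bits - 1
    return [round(p * levels) / levels for p in pixels]

# Toy 1-D "image" and loss gradients (illustrative values only).
x = [0.20, 0.50, 0.81]
grads = [0.3, -0.7, 0.0]
x_adv = fgsm_perturb(x, grads, eps=0.03)
print(bit_depth_squeeze(x_adv, bits=3))
```

PGD iterates this FGSM step several times with re-projection onto the epsilon ball, which is why it degrades the undefended model further (13% vs. 32%).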

Assessing artificial intelligence in breast screening with stratified results on 306 839 mammograms across geographic regions, age, breast density and ethnicity: A Retrospective Investigation Evaluating Screening (ARIES) study.

Oberije CJG, Currie R, Leaver A, Redman A, Teh W, Sharma N, Fox G, Glocker B, Khara G, Nash J, Ng AY, Kecskemethy PD

pubmed papers · May 14, 2025
Evaluate an Artificial Intelligence (AI) system in breast screening through stratified results across age, breast density, ethnicity and screening centres, from different UK regions. A large-scale retrospective study evaluating two variations of using AI as an independent second reader in double reading was executed. Stratifications were conducted for clinical and operational metrics. Data from 306 839 mammography cases screened between 2017 and 2021 were used, covering three different UK regions. The impact on safety and effectiveness was assessed using clinical metrics: cancer detection rate and positive predictive value, stratified by age, breast density and ethnicity. Operational impact was assessed through reading workload and recall rate, measured overall and per centre. Non-inferiority was tested for AI workflows compared with human double reading and, when passed, superiority was tested. The AI interval cancer (IC) flag rate was assessed to estimate the additional cancer detection opportunity with AI that cannot be assessed retrospectively. The AI workflows passed non-inferiority or superiority tests for every metric across all subgroups, with workload savings between 38.3% and 43.7%. The AI standalone flagged 41.2% of ICs overall, ranging between 33.3% and 46.8% across subgroups, with the highest detection rate for dense breasts. Human double reading and AI workflows showed the same performance disparities across subgroups. The AI integrations maintained or improved performance on all metrics for all subgroups while achieving significant workload reduction. Moreover, complementing these integrations with AI as an additional reader can improve cancer detection. The granularity of assessment showed that screening with the AI-system integrations was as safe as standard double reading across heterogeneous populations.
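The stratified clinical metrics described above (cancer detection rate and recall rate per subgroup) reduce to simple per-group counts. The sketch below uses toy cases and assumed field names, not the study's actual schema:

```python
from collections import defaultdict

def stratified_rates(cases, key):
    """Group screening cases by a stratification key (e.g. age band, breast
    density, ethnicity, centre) and report the cancer detection rate per
    1000 screened and the recall rate as a percentage."""
    groups = defaultdict(list)
    for case in cases:
        groups[case[key]].append(case)
    return {k: {"cdr_per_1000": 1000 * sum(c["cancer_detected"] for c in g) / len(g),
                "recall_pct": 100 * sum(c["recalled"] for c in g) / len(g)}
            for k, g in groups.items()}

# Hypothetical cases for illustration only.
cases = [{"age": "50-54", "cancer_detected": 1, "recalled": 1},
         {"age": "50-54", "cancer_detected": 0, "recalled": 1},
         {"age": "55-59", "cancer_detected": 0, "recalled": 0}]
print(stratified_rates(cases, "age"))
```

Running the same computation with `key="centre"` or `key="density"` yields the other stratifications; the non-inferiority tests then compare these per-group rates between workflows.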

Evaluation of an artificial intelligence noise reduction tool for conventional X-ray imaging - a visual grading study of pediatric chest examinations at different radiation dose levels using anthropomorphic phantoms.

Hultenmo M, Pernbro J, Ahlin J, Bonnier M, Båth M

pubmed papers · May 13, 2025
Noise reduction tools developed with artificial intelligence (AI) may be implemented to improve image quality and reduce radiation dose, which is of special interest in the more radiosensitive pediatric population. The aim of the present study was to examine the effect of the AI-based intelligent noise reduction (INR) on image quality at different dose levels in pediatric chest radiography. Anteroposterior and lateral images of two anthropomorphic phantoms were acquired with both standard noise reduction and INR at different dose levels. In total, 300 anteroposterior and 420 lateral images were included. Image quality was evaluated by three experienced pediatric radiologists. Gradings were analyzed with visual grading characteristics (VGC) resulting in area under the VGC curve (AUC<sub>VGC</sub>) values and associated confidence intervals (CI). Image quality of different anatomical structures and overall clinical image quality were statistically significantly better in the anteroposterior INR images than in the corresponding standard noise reduced images at each dose level. Compared with reference anteroposterior images at a dose level of 100% with standard noise reduction, the image quality of the anteroposterior INR images was graded as significantly better at dose levels of ≥ 80%. Statistical significance was also achieved at lower dose levels for some structures. The assessments of the lateral images showed similar trends but with fewer significant results. The results of the present study indicate that the AI-based INR may potentially be used to improve image quality at a specific dose level or to reduce dose and maintain the image quality in pediatric chest radiography.
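The VGC analysis above reduces paired quality ratings to an area under the VGC curve. A minimal nonparametric estimate of that AUC (the Mann-Whitney form; a sketch, not the study's analysis software) can be written as:

```python
def vgc_auc(test_ratings, ref_ratings):
    """AUC_VGC as a Mann-Whitney statistic: over all (test, reference)
    rating pairs, count wins plus half credit for ties. 0.5 means equal
    perceived image quality; values above 0.5 favour the test condition
    (here, the INR images over standard noise reduction)."""
    wins = ties = 0
    for t in test_ratings:
        for r in ref_ratings:
            if t > r:
                wins += 1
            elif t == r:
                ties += 1
    return (wins + 0.5 * ties) / (len(test_ratings) * len(ref_ratings))

# Illustrative ordinal quality gradings (assumed 1-5 scale).
print(vgc_auc([3, 4, 5], [3, 3, 4]))
```

Confidence intervals for AUC_VGC are then typically obtained by resampling the rating pairs, which is how the study's significance statements per structure and dose level arise.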

Enhancing Liver Fibrosis Measurement: Deep Learning and Uncertainty Analysis Across Multi-Centre Cohorts

Wojciechowska, M. K., Malacrino, S., Windell, D., Culver, E., Dyson, J., UK-AIH Consortium,, Rittscher, J.

medrxiv preprint · May 13, 2025
Highlights: A retrospective cohort of liver biopsies collected from over 20 healthcare centres has been assembled. The cohort is characterized on the basis of the collagen staining used for liver fibrosis assessment. A computational pipeline for the quantification of collagen from liver histology slides has been developed and applied to the described cohorts. Uncertainty estimation is evaluated as a method to build trust in deep-learning-based collagen predictions.
The introduction of digital pathology has revolutionised the way in which histology-based measurements can support large, multi-centre studies. However, pooling data from various centres often reveals significant differences in specimen quality, particularly regarding histological staining protocols. These variations present challenges in reliably quantifying features from stained tissue sections using image analysis. In this study, we investigate the statistical variation of measuring fibrosis across a liver cohort composed of four individual studies from 20 clinical sites across Europe and North America. In a first step, we apply colour consistency measurements to analyse staining variability across this diverse cohort. Subsequently, a learnt segmentation model is used to quantify the collagen proportionate area (CPA), and uncertainty mapping is employed to evaluate the quality of the segmentations. Our analysis highlights a lack of standardisation in PicroSirius Red (PSR) staining practices, revealing significant variability in staining protocols across institutions.
The deconvolution of the staining of the digitised slides identified the different numbers and types of counterstains used, leading to potentially incomparable results. Our analysis highlights the need for standardised staining protocols to ensure reliable collagen quantification in liver biopsies. The tools and methodologies presented here can be applied to perform slide colour quality control in digital pathology studies, thus enhancing the comparability and reproducibility of fibrosis assessment in the liver and other tissues.
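The collagen proportionate area (CPA) quantified above is simply the fraction of tissue pixels the segmentation labels as collagen, with background excluded. A minimal sketch (the label names are assumptions for illustration):

```python
def collagen_proportionate_area(pixel_labels):
    """Collagen proportionate area: fraction of tissue pixels labelled as
    collagen. Background pixels (glass, artefacts) are excluded from the
    denominator so slide framing does not bias the measurement."""
    tissue = [label for label in pixel_labels if label != "background"]
    return sum(1 for label in tissue if label == "collagen") / len(tissue)

# Toy per-pixel labels from a hypothetical segmentation model.
labels = ["collagen", "tissue", "background", "collagen", "tissue", "tissue"]
print(collagen_proportionate_area(labels))
```

In the study's pipeline, the per-pixel uncertainty map would additionally flag slides whose CPA rests on low-confidence segmentations, for instance where an unusual counterstain confuses the model.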

Diagnosis of thyroid cartilage invasion by laryngeal and hypopharyngeal cancers based on CT with deep learning.

Takano Y, Fujima N, Nakagawa J, Dobashi H, Shimizu Y, Kanaya M, Kano S, Homma A, Kudo K

pubmed papers · May 13, 2025
To develop a convolutional neural network (CNN) model to diagnose thyroid cartilage invasion by laryngeal and hypopharyngeal cancers observed on computed tomography (CT) images and evaluate the model's diagnostic performance. We retrospectively analyzed 91 cases of laryngeal or hypopharyngeal cancer treated surgically at our hospital from April 2010 through May 2023, and we divided the cases into datasets for training (n = 61) and testing (n = 30). We reviewed the CT images and pathological diagnoses in all cases to determine invasion-positive or -negative status as a ground truth. We trained the new CNN model to classify thyroid cartilage invasion-positive or -negative status from the pre-treatment axial CT images by transfer learning from Residual Network 101 (ResNet101), using the training dataset. We then used the test dataset to evaluate the model's performance. Two radiologists, one with extensive head and neck imaging experience (senior reader) and the other with less experience (junior reader), reviewed the CT images of the test dataset to determine whether thyroid cartilage invasion was present. On the test dataset, the CNN model achieved an area under the curve (AUC) of 0.82, with 90% accuracy, 80% sensitivity, and 95% specificity. The CNN model showed a significant difference in AUC compared with the junior reader (p = 0.035) but not the senior reader (p = 0.61). The CNN-based diagnostic model can be a useful supportive tool for the assessment of thyroid cartilage invasion in patients with laryngeal or hypopharyngeal cancer.
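The reported accuracy, sensitivity and specificity follow directly from the confusion counts on the 30-case test set. A minimal sketch with toy labels (not the study's data):

```python
def diagnostic_metrics(y_true, y_pred):
    """Binary diagnostic metrics from labels, with 1 = invasion-positive
    and 0 = invasion-negative. Sensitivity is the detection rate among
    true positives; specificity among true negatives."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

# Illustrative labels only.
print(diagnostic_metrics([1, 1, 0, 0], [1, 0, 0, 0]))
```

The AUC additionally requires the model's continuous invasion scores rather than thresholded labels, which is why it is reported separately from these three rates.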

Trustworthy AI for stage IV non-small cell lung cancer: Automatic segmentation and uncertainty quantification.

Dedeken S, Conze PH, Damerjian Pieters V, Gallinato O, Faure J, Colin T, Visvikis D

pubmed papers · May 13, 2025
Accurate segmentation of lung tumors is essential for advancing personalized medicine in non-small cell lung cancer (NSCLC). However, stage IV NSCLC presents significant challenges due to heterogeneous tumor morphology and the presence of associated conditions including infection, atelectasis and pleural effusion. The complexity of multicentric datasets further complicates robust segmentation across diverse clinical settings. In this study, we evaluate deep-learning-based approaches for automated segmentation of advanced-stage lung tumors using 3D architectures on 387 CT scans from the Deep-Lung-IV study. Through comprehensive experiments, we assess the impact of model design, HU windowing, and dataset size on delineation performance, providing practical guidelines for robust implementation. Additionally, we propose a confidence score using deep ensembles to quantify prediction uncertainty and automate the identification of complex cases that require further review. Our results demonstrate the potential of attention-based architectures and specific preprocessing strategies to improve segmentation quality in such a challenging clinical scenario, while emphasizing the importance of uncertainty estimation to build trustworthy AI systems in medical imaging. Code is available at: https://github.com/Sacha-Dedeken/SegStageIVNSCLC.
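The paper's exact confidence score is not spelled out in the abstract; a common deep-ensemble formulation, averaging member predictions and penalising disagreement, can be sketched as follows (toy per-voxel foreground probabilities assumed):

```python
def ensemble_confidence(member_probs):
    """Deep-ensemble sketch: average per-voxel foreground probabilities from
    several independently trained models, and use the ensemble variance as
    an uncertainty signal. The case-level confidence is one minus the mean
    variance, so cases where members disagree are flagged for review."""
    n_members = len(member_probs)
    n_voxels = len(member_probs[0])
    mean = [sum(m[v] for m in member_probs) / n_members for v in range(n_voxels)]
    var = [sum((m[v] - mean[v]) ** 2 for m in member_probs) / n_members
           for v in range(n_voxels)]
    confidence = 1.0 - sum(var) / n_voxels
    return mean, confidence

# Two hypothetical members that disagree completely on a 2-voxel case.
mean, conf = ensemble_confidence([[1.0, 0.0], [0.0, 1.0]])
print(mean, conf)
```

Thresholding this confidence score is one way to automate the triage of complex stage IV cases (effusion, atelectasis) back to a human reader.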

Fast cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration.

Wu J, Zhou S

pubmed papers · May 13, 2025
Accurately and efficiently estimating the cortical thickness from magnetic resonance images (MRIs) is crucial for neuroscientific studies and clinical applications with various large-scale datasets. Diffeomorphic registration-based cortical thickness estimation (DiReCT) is a prominent traditional method of calculating such measures directly from original MRIs by applying diffeomorphic registration on segmented tissues. However, it suffers from prolonged computational time and limited reproducibility, impediments to its application in large-scale studies or real-time environments. This paper proposes a framework for cortical thickness estimation using deep learning-based anatomy segmentation and diffeomorphic registration. The framework begins by applying a convolutional neural network (CNN) segmentation model to the original image, generating a segmentation map that accurately delineates the cortical boundaries. Subsequently, a pair of distance maps generated from the segmentation map is injected into an unsupervised learning-based registration network for fast and diffeomorphic registration. A novel algorithm based on diffeomorphisms of different time points is proposed to calculate the final thickness map. We systematically evaluated and compared our method with surface-based measures from FreeSurfer on two distinct datasets. The experimental results demonstrated a superior performance of the proposed method, surpassing the performance of DiReCT and DL+DiReCT in terms of time efficiency and consistency with FreeSurfer. Our code and pre-trained models are publicly available at: https://github.com/wujiong-hub/DL-CTE.git.
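The distance maps injected into the registration network can be illustrated with a 1-D toy (an assumption for illustration; the paper operates on 3-D segmentation maps):

```python
def distance_map(labels, target):
    """1-D toy of a distance map derived from a segmentation: for each
    voxel, the distance (in voxels) to the nearest voxel carrying the
    target label. Pairs of such maps, e.g. to the white-matter and pial
    boundaries, give the registration network a smooth geometric signal
    that a hard label map lacks."""
    positions = [i for i, label in enumerate(labels) if label == target]
    return [min(abs(i - p) for p in positions) for i in range(len(labels))]

# Toy 1-D segmentation: 0 = background, 1 = cortical ribbon.
print(distance_map([0, 1, 1, 0, 0], target=1))
```

In the full method, the diffeomorphic registration then tracks how points flow between the two boundaries over time, and the thickness at a cortical voxel is read off from that trajectory.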

Automatic deep learning segmentation of mandibular periodontal bone topography on cone-beam computed tomography images.

Palkovics D, Molnar B, Pinter C, García-Mato D, Diaz-Pinto A, Windisch P, Ramseier CA

pubmed papers · May 13, 2025
This study evaluated the performance of a multi-stage Segmentation Residual Network (SegResNet)-based deep learning (DL) model for the automatic segmentation of cone-beam computed tomography (CBCT) images of patients with stage III and IV periodontitis. Seventy pre-processed CBCT scans from patients undergoing periodontal rehabilitation were used for training and validation. The model was tested on 10 CBCT scans independent from the training dataset by comparing results with semi-automatic (SA) segmentations. Segmentation accuracy was assessed using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and Hausdorff distance 95<sup>th</sup> percentile (HD95). Linear periodontal measurements were performed on four tooth surfaces to assess the validity of the DL segmentation in the periodontal region. The DL model achieved a mean DSC of 0.9650 ± 0.0097, with an IoU of 0.9340 ± 0.0180 and HD95 of 0.4820 mm ± 0.1269 mm, showing strong agreement with SA segmentation. Linear measurements revealed high statistical correlations between the mesial, distal, and lingual surfaces, with intraclass correlation coefficients (ICC) of 0.9442 (p<0.0001), 0.9232 (p<0.0001), and 0.9598 (p<0.0001), respectively, while buccal measurements revealed lower consistency, with an ICC of 0.7481 (p<0.0001). The DL method reduced segmentation time 47-fold compared to the SA method. Acquired 3D models may enable precise treatment planning in cases where conventional diagnostic modalities are insufficient. However, the robustness of the model must be increased to improve its general reliability and consistency at the buccal aspect of the periodontal region. This study presents a DL model for the CBCT-based segmentation of periodontal defects, demonstrating high accuracy and a 47-fold time reduction compared to SA methods, thus improving the feasibility of 3D diagnostics for advanced periodontitis.
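The DSC and IoU reported above have simple closed forms on binary masks; a minimal sketch (not the study's pipeline, which operates on 3-D volumes):

```python
def dice_iou(mask_a, mask_b):
    """Dice similarity coefficient and Intersection over Union for two
    binary masks flattened to 0/1 integer sequences, as used to compare
    DL and semi-automatic segmentations voxel by voxel."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    union = total - inter
    return 2 * inter / total, inter / union

# Toy flattened masks for illustration.
dice, iou = dice_iou([1, 1, 0, 0], [1, 0, 1, 0])
print(dice, iou)
```

The two metrics are monotonically related (IoU = DSC / (2 - DSC)), which is why the paper's DSC of 0.9650 and IoU of 0.9340 track each other; HD95 adds complementary boundary-distance information that overlap metrics miss.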

A Deep Learning-Driven Inhalation Injury Grading Assistant Using Bronchoscopy Images

Yifan Li, Alan W Pang, Jo Woon Chong

arxiv preprint · May 13, 2025
Inhalation injuries present a challenge in clinical diagnosis and grading because conventional grading methods, such as the Abbreviated Injury Score (AIS), are subjective and lack robust correlation with clinical parameters like mechanical ventilation duration and patient mortality. This study introduces a novel deep learning-based diagnosis assistant tool for grading inhalation injuries using bronchoscopy images to overcome subjective variability and enhance consistency in severity assessment. Our approach leverages data augmentation techniques, including graphic transformations, Contrastive Unpaired Translation (CUT), and CycleGAN, to address the scarcity of medical imaging data. We evaluate the classification performance of two deep learning models, GoogLeNet and Vision Transformer (ViT), across a dataset significantly expanded through these augmentation methods. The results demonstrate that GoogLeNet combined with CUT is the most effective configuration for grading inhalation injuries from bronchoscopy images, achieving a classification accuracy of 97.8%. Histogram and frequency-domain analyses reveal the changes introduced by CUT augmentation, with shifts in the intensity distribution and in the texture details of the frequency spectrum. PCA visualizations underscore that CUT substantially enhances class separability in the feature space. Moreover, Grad-CAM analyses provide insight into the decision-making process; the mean intensity of CUT heatmaps is 119.6, significantly exceeding the 98.8 of the original dataset. Our proposed tool leverages mechanical ventilation periods as a novel grading standard, providing comprehensive diagnostic support.

DEMAC-Net: A Dual-Encoder Multiattention Collaborative Network for Cervical Nerve Pathway and Adjacent Anatomical Structure Segmentation.

Cui H, Duan J, Lin L, Wu Q, Guo W, Zang Q, Zhou M, Fang W, Hu Y, Zou Z

pubmed papers · May 13, 2025
Currently, cervical anesthesia is performed using three main approaches: superficial cervical plexus block, deep cervical plexus block, and intermediate plexus nerve block. However, each technique carries inherent risks and demands significant clinical expertise. Ultrasound imaging, known for its real-time visualization capabilities and accessibility, is widely used in both diagnostic and interventional procedures. Nevertheless, accurate segmentation of small and irregularly shaped structures such as the cervical and brachial plexuses remains challenging due to image noise, complex anatomical morphology, and limited annotated training data. This study introduces DEMAC-Net, a dual-encoder multiattention collaborative network, to significantly improve the segmentation accuracy of these neural structures. By precisely identifying the cervical nerve pathway (CNP) and adjacent anatomical tissues, DEMAC-Net aims to assist clinicians, especially those less experienced, in effectively guiding anesthesia procedures and accurately identifying optimal needle insertion points. Consequently, this improvement is expected to enhance clinical safety, reduce procedural risks, and streamline decision-making efficiency during ultrasound-guided regional anesthesia. DEMAC-Net combines a dual-encoder architecture with the Spatial Understanding Convolution Kernel (SUCK) and the Spatial-Channel Attention Module (SCAM) to extract multi-scale features effectively. Additionally, a Global Attention Gate (GAG) and inter-layer fusion modules refine relevant features while suppressing noise. A novel dataset, the Neck Ultrasound Dataset (NUSD), was introduced, containing 1,500 annotated ultrasound images across seven anatomical regions. Extensive experiments were conducted on both NUSD and the public BUSI dataset, comparing DEMAC-Net to state-of-the-art models using metrics such as the Dice Similarity Coefficient (DSC) and Intersection over Union (IoU).
On the NUSD dataset, DEMAC-Net achieved a mean DSC of 93.3%, outperforming existing models. For external validation on the BUSI dataset, it demonstrated superior generalization, achieving a DSC of 87.2% and a mean IoU of 77.4%, surpassing other advanced methods. Notably, DEMAC-Net displayed consistent segmentation stability across all tested structures. The proposed DEMAC-Net significantly improves segmentation accuracy for small nerves and complex anatomical structures in ultrasound images, outperforming existing methods in terms of accuracy and computational efficiency. This framework holds great potential for enhancing ultrasound-guided procedures, such as peripheral nerve blocks, by providing more precise anatomical localization, ultimately improving clinical outcomes.