Machine Learning Models in the Detection of MB2 Canal Orifice in CBCT Images.

Shetty S, Yuvali M, Ozsahin I, Al-Bayatti S, Narasimhan S, Alsaegh M, Al-Daghestani H, Shetty R, Castelino R, David LR, Ozsahin DU

Jun 1 2025
The objective of the present study was to determine the accuracy of machine learning (ML) models in the detection of mesiobuccal (MB2) canals in axial cone-beam computed tomography (CBCT) sections. A total of 2500 CBCT scans from the oral radiology department of University Dental Hospital, Sharjah were screened to obtain 277 high-resolution, small field-of-view CBCT scans with maxillary molars. Of the 277 scans, 160 showed an MB2 orifice and the remaining 117 did not. Two-dimensional axial images of these scans were then cropped. The images were classified and labelled as N (absence of MB2) or M (presence of MB2) by 2 examiners. The images were embedded using Google's Inception V3 and transferred to the ML classification models. Six ML models (logistic regression [LR], naïve Bayes [NB], support vector machine [SVM], k-nearest neighbours [kNN], random forest [RF], neural network [NN]) were then tested on their ability to classify the images as M or N. The classification metrics (area under the curve [AUC], accuracy, F1-score, precision) of the models were assessed in 3 steps. NN (0.896), LR (0.893), and SVM (0.886) showed the highest AUC values with specified target variables (steps 2 and 3). The highest accuracy was exhibited by LR (0.849) and NN (0.848) with specified target variables. The highest precision (86.8%) and recall (92.5%) were observed with the SVM model. The success rates (AUC, precision, recall) of the ML algorithms in detecting MB2 were remarkable in our study: when the target variable was specified, precision reached 86.8% and recall 92.5%. The present study showed promising results for ML-based detection of the MB2 canal using axial CBCT slices.
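
A minimal sketch of the kind of pipeline described above: embed cropped axial slices with a pretrained Inception V3, then compare scikit-learn classifiers by cross-validated AUC. The Keras/scikit-learn APIs, data shapes, hyperparameters, and placeholder data are illustrative assumptions, not details from the paper.

```python
import numpy as np
from tensorflow.keras.applications import InceptionV3
from tensorflow.keras.applications.inception_v3 import preprocess_input
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# ImageNet-pretrained embedder; global average pooling gives 2048-d vectors
embedder = InceptionV3(weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    """images: float array (n, 299, 299, 3) -> (n, 2048) feature vectors."""
    return embedder.predict(preprocess_input(images), verbose=0)

# Placeholder data: cropped axial slices; y = 1 (MB2 present) / 0 (absent)
X_img = np.random.rand(16, 299, 299, 3) * 255.0
y = np.array([0, 1] * 8)

X = embed(X_img)
models = [("LR", LogisticRegression(max_iter=1000)),
          ("NB", GaussianNB()),
          ("SVM", SVC(probability=True)),
          ("kNN", KNeighborsClassifier()),
          ("RF", RandomForestClassifier()),
          ("NN", MLPClassifier(max_iter=500))]
for name, clf in models:
    auc = cross_val_score(clf, X, y, cv=4, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.3f}")
```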

Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning.

Zhang S, Chu S, Qiang Y, Zhao J, Wang Y, Wei X

Jun 1 2025
Computer-aided diagnosis systems based on deep neural networks rely heavily on datasets with high-quality labels. However, manual annotation for lesion diagnosis depends on image features and often requires professional experience and a complex image analysis process. This inevitably introduces noisy labels, which can misguide the training of classification models. Our goal is to design an effective method to address the challenges posed by label noise in medical images. We propose a novel noise-tolerant medical image classification framework consisting of two phases: fore-training correction and progressive hard-sample enhanced learning. In the first phase, we design a dual-branch sample partition detection scheme that effectively classifies each instance into one of three subsets: clean, hard, or noisy. Simultaneously, we propose a hard-sample label refinement strategy based on class prototypes with confidence-perception weighting, together with an effective joint correction method for noisy samples, enabling the acquisition of higher-quality training data. In the second phase, we design a progressive hard-sample reinforcement learning method to enhance the model's ability to learn discriminative feature representations. This approach accounts for sample difficulty and mitigates the effects of label noise in medical datasets. Our framework achieves an accuracy of 82.39% on the pneumoconiosis dataset collected by our laboratory. On a five-class skin disease dataset with six levels of label noise (0, 0.05, 0.1, 0.2, 0.3, and 0.4), the average accuracy over the last ten epochs reaches 88.51%, 86.64%, 85.02%, 83.01%, 81.95%, and 77.89%, respectively. For binary polyp classification under noise rates of 0.2, 0.3, and 0.4, the average accuracy over the last ten epochs is 97.90%, 93.77%, and 89.33%, respectively. The effectiveness of our proposed framework is demonstrated by its performance on three challenging datasets with both real and synthetic noise. Experimental results further demonstrate the robustness of our method across varying noise rates.
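
The abstract does not spell out the dual-branch partition scheme, but a common way to obtain a clean/hard/noisy split is to fit a two-component Gaussian mixture on per-sample training losses (as in DivideMix) and treat the ambiguous middle as hard. The sketch below shows that heuristic, not the paper's actual method; thresholds and synthetic losses are illustrative.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def partition_by_loss(losses, clean_thr=0.7, noisy_thr=0.3):
    """losses: (n,) per-sample losses. Returns index arrays (clean, hard, noisy)."""
    gmm = GaussianMixture(n_components=2, random_state=0)
    gmm.fit(losses.reshape(-1, 1))
    clean_comp = np.argmin(gmm.means_.ravel())        # low-loss component = clean
    p_clean = gmm.predict_proba(losses.reshape(-1, 1))[:, clean_comp]
    clean = np.where(p_clean >= clean_thr)[0]
    noisy = np.where(p_clean <= noisy_thr)[0]
    hard = np.setdiff1d(np.arange(len(losses)), np.union1d(clean, noisy))
    return clean, hard, noisy

# Synthetic losses: a low-loss (mostly clean) and a high-loss (mostly noisy) mode
losses = np.concatenate([np.random.gamma(2, 0.1, 80),
                         np.random.gamma(8, 0.3, 20)])
clean, hard, noisy = partition_by_loss(losses)
print(len(clean), "clean,", len(hard), "hard,", len(noisy), "noisy")
```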

Generative adversarial networks in medical image reconstruction: A systematic literature review.

Hussain J, Båth M, Ivarsson J

Jun 1 2025
Recent advancements in generative adversarial networks (GANs) have demonstrated substantial potential in medical image processing. Despite this progress, reconstructing images from incomplete data remains a challenge, impacting image quality. This systematic literature review explores the use of GANs in enhancing and reconstructing medical imaging data. A survey of the computing literature was conducted using the ACM Digital Library to identify relevant journal and conference articles using keyword combinations such as "generative adversarial networks or generative adversarial network," "medical image or medical imaging," and "image reconstruction." Across the reviewed articles, 122 datasets were used in 175 instances, 89 top metrics were employed 335 times, 10 different tasks appeared with a total count of 173, 31 distinct organs featured in 119 instances, and 18 modalities were utilized in 121 instances, collectively depicting substantial use of GANs in medical imaging. The adaptability and efficacy of GANs were showcased across diverse medical tasks, organs, and modalities, using top public as well as private/synthetic datasets for disease diagnosis, including the identification of conditions such as cancer in different anatomical regions. The study emphasizes GANs' increasing integration and adaptability across radiology modalities, showcasing their transformative impact on diagnostic techniques, including cross-modality tasks. The intricate interplay between network size, batch size, and loss function refinement significantly affects GAN performance, although challenges in training persist. The study underscores GANs as dynamic tools shaping medical imaging, contributing substantially to image quality and training methodologies, and positioning them as significant drivers of medical advancement.
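
For readers unfamiliar with the loss structure the review keeps returning to, here is a toy PyTorch sketch of the adversarial-plus-reconstruction objective (pix2pix-style) that many reconstruction GANs use. The networks, shapes, and the lambda weight of 100 are illustrative stand-ins, not any specific reviewed model.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1))           # toy generator
D = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2), nn.ReLU(),
                  nn.Flatten(), nn.LazyLinear(1))           # toy discriminator
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

undersampled = torch.randn(4, 1, 64, 64)   # incomplete/degraded input
full = torch.randn(4, 1, 64, 64)           # fully sampled target

fake = G(undersampled)
# Discriminator step: tell real targets apart from generated images
d_loss = (bce(D(full), torch.ones(4, 1)) +
          bce(D(fake.detach()), torch.zeros(4, 1)))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()
# Generator step: fool D while staying close to the target (lambda = 100)
g_loss = bce(D(fake), torch.ones(4, 1)) + 100.0 * l1(fake, full)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(f"d_loss={d_loss.item():.3f}, g_loss={g_loss.item():.3f}")
```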

Lag-Net: Lag correction for cone-beam CT via a convolutional neural network.

Ren C, Kan S, Huang W, Xi Y, Ji X, Chen Y

Jun 1 2025
Due to the presence of charge traps in amorphous silicon flat-panel detectors, lag signals are generated in consecutively captured projections. These signals lead to ghosting in projection images and severe lag artifacts in cone-beam computed tomography (CBCT) reconstructions. Traditional linear time-invariant (LTI) correction requires measured lag correction factors (LCFs) and may leave residual lag artifacts; this incomplete correction is partly attributable to its neglect of exposure dependency. To measure the lag signals more accurately and suppress lag artifacts, we develop a novel hardware correction method. This method requires two scans of the same object, with adjustments to the operating timing of the CT instrumentation during the second scan to measure the lag signal from the first. While this hardware correction significantly mitigates lag artifacts, it is complex to implement and imposes high demands on the CT instrumentation. To streamline the process, we introduce a deep learning method called Lag-Net to remove the lag signal, using the nearly lag-free results of the hardware correction as training targets for the network. Qualitative and quantitative analyses of experimental results on both simulated and real datasets demonstrate that the deep learning correction significantly outperforms traditional LTI correction in terms of lag artifact suppression and image quality enhancement. Furthermore, the deep learning method achieves reconstruction results comparable to those obtained from hardware correction while avoiding its operational complexities. The proposed hardware correction method, despite its operational complexity, demonstrates superior artifact suppression compared to the LTI algorithm, particularly under low-exposure conditions. The proposed Lag-Net, which uses the results of the hardware correction method as training targets, leverages the end-to-end nature of deep learning to circumvent the intricate operational drawbacks of hardware correction, and its correction efficacy surpasses that of the LTI algorithm in low-exposure scenarios.
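
For context on the LTI baseline the paper compares against, here is a numpy sketch of the classical correction: detector lag is modeled as a sum of exponentials with measured lag correction factors (amplitudes and time constants), and removed by a recursive deconvolution. The LCF values below are illustrative, not measured ones.

```python
import numpy as np

def add_lag(x, a, tau, dt=1.0):
    """Forward model: apply a multi-exponential lag impulse response
    (amplitudes a, time constants tau) to a frame sequence x."""
    a = np.asarray(a, dtype=float)
    e = np.exp(-dt / np.asarray(tau, dtype=float))
    S = np.zeros(len(a))
    y = np.empty_like(x, dtype=float)
    for k in range(len(x)):
        S = e * S + x[k]                 # per-term trap states
        y[k] = np.sum(a * S)
    return y

def lti_lag_correct(y, a, tau, dt=1.0):
    """Recursive LTI deconvolution: the exact inverse of add_lag."""
    a = np.asarray(a, dtype=float)
    e = np.exp(-dt / np.asarray(tau, dtype=float))
    S = np.zeros(len(a))
    x = np.empty_like(y, dtype=float)
    for k in range(len(y)):
        x[k] = (y[k] - np.sum(a * e * S)) / np.sum(a)   # remove trailing lag
        S = e * S + x[k]
    return x

a, tau = [0.9, 0.08, 0.02], [0.5, 5.0, 50.0]   # illustrative LCFs
truth = np.r_[np.ones(20), np.zeros(20)]        # exposure on, then off
lagged = add_lag(truth, a, tau)                 # trailing lag signal appears
print(np.allclose(lti_lag_correct(lagged, a, tau), truth))  # True if LCFs match
```

The correction is exact only when the assumed LCFs match the detector, which is why mismeasured or exposure-dependent lag leaves the residual artifacts the paper targets.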

MRI-based risk factors for intensive care unit admissions in acute neck infections.

Vierula JP, Merisaari H, Heikkinen J, Happonen T, Sirén A, Velhonoja J, Irjala H, Soukka T, Mattila K, Nyman M, Nurminen J, Hirvonen J

Jun 1 2025
We assessed risk factors and developed a score to predict intensive care unit (ICU) admissions using MRI findings and clinical data in acute neck infections. This retrospective study included patients with MRI-confirmed acute neck infection. Abscess diameters were measured on post-gadolinium T1-weighted Dixon MRI, and specific edema patterns, retropharyngeal edema (RPE) and mediastinal edema, were assessed on fat-suppressed T2-weighted Dixon MRI. A multivariate logistic regression model identified predictors of ICU admission, with risk scores derived from the regression coefficients. Model performance was evaluated using the area under the curve (AUC) from receiver operating characteristic analysis. Machine learning models (random forest, XGBoost, support vector machine, neural networks) were also tested. The sample included 535 patients, of whom 373 (70%) had an abscess and 62 (12%) required ICU treatment. Significant predictors of ICU admission were RPE, maximal abscess diameter (≥40 mm), and C-reactive protein (CRP) (≥172 mg/L). The risk score (0-7) (AUC=0.82, 95% confidence interval [CI] 0.77-0.88) outperformed CRP (AUC=0.73, 95% CI 0.66-0.80, p=0.001), maximal abscess diameter (AUC=0.72, 95% CI 0.64-0.80, p<0.001), and RPE (AUC=0.71, 95% CI 0.65-0.77, p<0.001). At a cut-off of >3, the risk score yielded the following metrics: sensitivity 66%, specificity 82%, positive predictive value 33%, negative predictive value 95%, accuracy 80%, and odds ratio 9.0. Discriminative performance was robust in internal (AUC=0.83) and hold-out (AUC=0.81) validations. The ML models did not outperform the regression model. A risk model incorporating RPE, abscess size, and CRP showed moderate accuracy and a high negative predictive value for ICU admission, supporting MRI's role in acute neck infections.
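
To illustrate how a points-based score like this is typically derived from regression coefficients, here is a hedged sketch on synthetic data: fit a logistic model on the three binary predictors, scale the coefficients to integer points, and check discrimination by AUC. The prevalences, effect sizes, and scaling factor are invented for illustration; the paper derives its own 0-7 scale.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 535
# Binary predictors: [RPE, abscess diameter >= 40 mm, CRP >= 172 mg/L]
X = rng.integers(0, 2, size=(n, 3))
logit = -3.0 + X @ np.array([1.2, 1.0, 0.9])      # assumed true effects
y = rng.random(n) < 1 / (1 + np.exp(-logit))      # synthetic ICU outcomes

model = LogisticRegression().fit(X, y)
# Scale coefficients to small integer points (factor 2 gives a ~0-7 range)
points = np.round(2 * model.coef_[0] / model.coef_[0].min()).astype(int)
score = X @ points
print("points per predictor:", points)
print("risk-score AUC:", round(roc_auc_score(y, score), 2))
```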

Atten-Nonlocal Unet: Attention and Non-local Unet for medical image segmentation.

Jia X, Wang W, Zhang M, Zhao B

Jun 1 2025
Convolutional neural network (CNN)-based models have emerged as the predominant approach for medical image segmentation due to their effective inductive bias. However, their limitation lies in their inability to capture long-range information. In this study, we propose the Atten-Nonlocal Unet model, which integrates CNNs and transformers to overcome this limitation and precisely capture global context in 2D features. Specifically, we utilize the BCSM attention module and the Cross Non-local module to enhance feature representation, thereby improving segmentation accuracy. Experimental results on the Synapse, ACDC, and AVT datasets show that Atten-Nonlocal Unet achieves DSC scores of 84.15%, 91.57%, and 86.94%, respectively, with 95% Hausdorff distance (HD95) values of 15.17, 1.16, and 4.78. Compared to existing methods for medical image segmentation, the proposed method demonstrates superior performance, maintaining high accuracy on large organs while improving segmentation of small organs.
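
The abstract does not define the BCSM attention or Cross Non-local modules, but both build on the standard non-local (self-attention) block of Wang et al. (2018). A minimal PyTorch sketch of that standard block, with illustrative channel sizes:

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Standard non-local block: every spatial position attends to all others."""
    def __init__(self, c):
        super().__init__()
        self.theta = nn.Conv2d(c, c // 2, 1)   # query projection
        self.phi = nn.Conv2d(c, c // 2, 1)     # key projection
        self.g = nn.Conv2d(c, c // 2, 1)       # value projection
        self.out = nn.Conv2d(c // 2, c, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c/2)
        k = self.phi(x).flatten(2)                     # (b, c/2, hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c/2)
        attn = torch.softmax(q @ k, dim=-1)            # pairwise affinities
        y = (attn @ v).transpose(1, 2).reshape(b, c // 2, h, w)
        return x + self.out(y)                         # residual connection

feat = torch.randn(2, 64, 32, 32)
print(NonLocalBlock(64)(feat).shape)   # torch.Size([2, 64, 32, 32])
```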

WAND: Wavelet Analysis-Based Neural Decomposition of MRS Signals for Artifact Removal.

Merkofer JP, van de Sande DMJ, Amirrajab S, Min Nam K, van Sloun RJG, Bhogal AA

Jun 1 2025
Accurate quantification of metabolites in magnetic resonance spectroscopy (MRS) is challenged by low signal-to-noise ratio (SNR), overlapping metabolites, and various artifacts. Particularly, unknown and unparameterized baseline effects obscure the quantification of low-concentration metabolites, limiting MRS reliability. This paper introduces wavelet analysis-based neural decomposition (WAND), a novel data-driven method designed to decompose MRS signals into their constituent components: metabolite-specific signals, baseline, and artifacts. WAND takes advantage of the enhanced separability of these components within the wavelet domain. The method employs a neural network, specifically a U-Net architecture, trained to predict masks for wavelet coefficients obtained through the continuous wavelet transform. These masks effectively isolate desired signal components in the wavelet domain, which are then inverse-transformed to obtain separated signals. Notably, an artifact mask is created by inverting the sum of all known signal masks, enabling WAND to capture and remove even unpredictable artifacts. The effectiveness of WAND in achieving accurate decomposition is demonstrated through numerical evaluations using simulated spectra. Furthermore, WAND's artifact removal capabilities significantly enhance the quantification accuracy of linear combination model fitting. The method's robustness is further validated using data from the 2016 MRS Fitting Challenge and in vivo experiments.
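
A toy sketch of the wavelet-masking idea: transform the signal with a continuous wavelet transform, apply a component mask, and invert by a weighted sum over scales. WAND predicts the masks with a U-Net; the hand-made frequency mask and the crude delta-style reconstruction below are only assumptions to make the mechanism concrete.

```python
import numpy as np
import pywt

t = np.linspace(0, 1, 1024)
metabolite = np.sin(2 * np.pi * 60 * t) * np.exp(-3 * t)   # fast-decaying peak
baseline = 0.5 * np.sin(2 * np.pi * 3 * t)                 # broad baseline
signal = metabolite + baseline

scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl",
                         sampling_period=t[1] - t[0])      # (scales, time)

# Stand-in for a learned mask: keep coefficients above 20 Hz
mask = (freqs > 20).astype(float)[:, None]
kept = coeffs * mask

# Approximate inverse CWT: weighted sum over scales (up to a constant)
recon = (kept / np.sqrt(scales)[:, None]).sum(axis=0)
recon *= signal.std() / recon.std()        # crude amplitude renormalization
print("correlation with metabolite:",
      round(np.corrcoef(recon, metabolite)[0, 1], 2))
```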

Driving Knowledge to Action: Building a Better Future With Artificial Intelligence-Enabled Multidisciplinary Oncology.

Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P

Jun 1 2025
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data, including imaging, genomic, clinical, and real-world evidence, AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also necessitates addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight, especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, Driving Knowledge to Action. We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.

Artificial Intelligence for Teaching Case Curation: Evaluating Model Performance on Imaging Report Discrepancies.

Bartley M, Huemann Z, Hu J, Tie X, Ross AB, Kennedy T, Warner JD, Bradshaw T, Lawrence EM

Jun 1 2025
Assess the feasibility of using a large language model (LLM) to identify valuable radiology teaching cases through report discrepancy detection. This retrospective study included after-hours head CT and musculoskeletal radiograph exams from January 2017 to December 2021. The discrepancy level between the trainee's preliminary interpretation and the final attending report was annotated on a 5-point scale. RadBERT, an LLM pretrained on a vast corpus of radiology text, was fine-tuned for discrepancy detection. For comparison and to ensure the robustness of the approach, Mixtral 8×7B, Mistral 7B, and Llama 2 were also evaluated. The model's performance in detecting discrepancies was evaluated on a randomly selected hold-out test set. A subset of discrepant cases identified by the LLM was compared to a random case set by recording clinical parameters and discrepant pathology and evaluating possible educational value. The F1 statistic was used for model comparison. Pearson's chi-squared test was employed to assess discrepancy prevalence and score between groups (significance set at p<0.05). The fine-tuned LLM achieved an overall accuracy of 90.5%, with a specificity of 95.5% and a sensitivity of 66.3% for discrepancy detection. Model sensitivity improved significantly with higher discrepancy scores: 49% (34/70) for score 2 versus 67% (47/62) for score 3 and 81% (35/43) for scores 4-5 (p<0.05 compared to score 2). The LLM-curated set showed a significantly higher prevalence of all discrepancies and of major discrepancies (scores 4 or 5) compared to a random case set (p<0.05 for both). Evaluation of clinical characteristics from both the random and discrepant case sets demonstrated a broad mix of pathologies and discrepancy types. An LLM can detect trainee report discrepancies, including both higher- and lower-scoring discrepancies, and may improve case set curation for resident education as well as serve as a trainee oversight tool.
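
A hedged sketch of fine-tuning a radiology-pretrained BERT variant for discrepancy detection with Hugging Face Transformers. The checkpoint name, the binary framing (discrepant vs. not), and the toy report pairs are assumptions for illustration; the study scored discrepancies on a 5-point scale and compared several models.

```python
import torch
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

ckpt = "UCSD-VA-health/RadBERT-RoBERTa-4m"   # assumed RadBERT checkpoint
tok = AutoTokenizer.from_pretrained(ckpt)
model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)

# Toy preliminary/final report pairs; label 1 = discrepant
pairs = [("Preliminary: no acute findings. Final: no acute findings.", 0),
         ("Preliminary: no fracture. Final: nondisplaced radial fracture.", 1)]

class ReportPairs(torch.utils.data.Dataset):
    def __len__(self):
        return len(pairs)
    def __getitem__(self, i):
        text, label = pairs[i]
        enc = tok(text, truncation=True, padding="max_length", max_length=256,
                  return_tensors="pt")
        return {"input_ids": enc["input_ids"][0],
                "attention_mask": enc["attention_mask"][0],
                "labels": torch.tensor(label)}

trainer = Trainer(model=model,
                  args=TrainingArguments(output_dir="out", num_train_epochs=1,
                                         per_device_train_batch_size=2),
                  train_dataset=ReportPairs())
trainer.train()
```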

Deep Learning to Localize Photoacoustic Sources in Three Dimensions: Theory and Implementation.

Gubbi MR, Bell MAL

Jun 1 2025
Surgical tool tip localization and tracking are essential components of surgical and interventional procedures. The cross sections of tool tips can be considered acoustic point sources, enabling these tasks with deep learning applied to photoacoustic channel data. However, source localization was previously limited to the lateral and axial dimensions of an ultrasound transducer. In this article, we developed a novel deep learning-based 3-D photoacoustic point source localization system using an object detection-based approach extended from our previous work. In addition, we derived theoretical relationships among point source locations, sound speeds, and waveform shapes in raw photoacoustic channel data frames. We then used this theory to develop a novel deep learning instance segmentation-based 3-D point source localization system. When tested with 4000 simulated, 993 phantom, and 1983 ex vivo channel data frames, the two systems achieved F1 scores as high as 99.82%, 93.05%, and 98.20%, respectively, and Euclidean localization errors (mean ± one standard deviation) as low as 1.46 ± 1.11 mm, 1.58 ± 1.30 mm, and 1.55 ± 0.86 mm, respectively. In addition, the instance segmentation-based system simultaneously estimated sound speeds with absolute errors (mean ± one standard deviation) of 19.22 ± 26.26 m/s in simulated data and standard deviations ranging from 14.6 to 32.3 m/s in experimental data. These results demonstrate the potential of the proposed photoacoustic imaging-based methods to localize and track tool tips in three dimensions during surgical and interventional procedures.
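
The theoretical relationship the paper exploits is geometric: a point source produces a curved wavefront in raw channel data, with the arrival time at each element set by the source position and sound speed. A hedged numpy sketch with an illustrative 1-D array geometry (pitch, element count, source position, and sound speed are assumptions):

```python
import numpy as np

pitch, n_elem = 0.3e-3, 128
elem_x = (np.arange(n_elem) - n_elem / 2) * pitch   # 1-D array, elements at z=0
src = np.array([2.0e-3, 1.0e-3, 20.0e-3])           # source (x, y, z) in meters
c = 1540.0                                          # assumed sound speed, m/s

# Arrival time per element: t_i = |r_i - r_src| / c  (a hyperbolic wavefront)
dist = np.sqrt((elem_x - src[0])**2 + src[1]**2 + src[2]**2)
t = dist / c

# Since (t_i * c)^2 = (x_i - x_s)^2 + y_s^2 + z_s^2 is quadratic in x_i,
# fitting it recovers the lateral source position from the wavefront shape
# (a 1-D array leaves an ambiguity between elevation y and depth z).
coef = np.polyfit(elem_x, (t * c)**2, 2)
x_s = -coef[1] / 2                                  # vertex of the parabola
print("recovered lateral position (mm):", round(x_s * 1e3, 2))
```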