Neural Network-based Automated Classification of 18F-FDG PET/CT Lesions and Prognosis Prediction in Nasopharyngeal Carcinoma Without Distant Metastasis.

Lv Y, Zheng D, Wang R, Zhou Z, Gao Z, Lan X, Qin C

PubMed · May 9, 2025
To evaluate the diagnostic performance of the PET Assisted Reporting System (PARS) in nasopharyngeal carcinoma (NPC) patients without distant metastasis, and to investigate the prognostic significance of the metabolic parameters. Eighty-three NPC patients who underwent pretreatment 18F-FDG PET/CT were retrospectively collected. First, the sensitivity, specificity, and accuracy of PARS for diagnosing malignant lesions were calculated, using histopathology as the gold standard. Next, metabolic parameters of the primary tumor were derived using both PARS and manual segmentation, and the differences and consistency between the two methods were analyzed. Finally, the prognostic value of the PET metabolic parameters was evaluated for progression-free survival (PFS) and overall survival (OS). PARS demonstrated high patient-based accuracy (97.2%), sensitivity (88.9%), and specificity (97.4%); the corresponding lesion-based values were 96.7%, 84.0%, and 96.9%. Manual segmentation yielded higher metabolic tumor volume (MTV) and total lesion glycolysis (TLG) than PARS, but metabolic parameters from the two methods were highly correlated and consistent. ROC analysis showed that the metabolic parameters differed in prognostic performance but, overall, predicted 3-year PFS and OS well. MTV and age were independent prognostic factors, and Cox proportional-hazards models combining the two showed significantly improved prediction. Kaplan-Meier analysis confirmed better prognosis in the low-risk group defined by the combined indicators (χ² = 42.25, P < 0.001; χ² = 20.44, P < 0.001). This preliminary validation of PARS in NPC patients without distant metastasis shows high diagnostic sensitivity and accuracy for lesion identification and classification, with metabolic parameters that correlate well with manual segmentation. MTV reflects prognosis, and combining it with age enhances prognostic prediction and risk stratification.
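The abstract reports that combining MTV with age improved Cox model prediction; as a rough, hypothetical illustration of that kind of analysis, the sketch below fits a Cox proportional-hazards model with the `lifelines` package on simulated data (all column names and values are made up and are not from the study).

```python
# Hypothetical sketch: combining MTV and age in a Cox proportional-hazards
# model with the lifelines package. All values below are simulated; only the
# cohort size (83) comes from the abstract.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 83
df = pd.DataFrame({
    "mtv_ml": rng.gamma(shape=2.0, scale=8.0, size=n),   # metabolic tumor volume (mL)
    "age_years": rng.integers(25, 75, size=n),
    "pfs_months": rng.exponential(scale=36.0, size=n),   # time to progression or censoring
    "progressed": rng.integers(0, 2, size=n),            # 1 = progression observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="pfs_months", event_col="progressed")
cph.print_summary()  # hazard ratios for MTV and age

# A combined risk score (the model's partial hazard) can then be split at its
# median to define low- and high-risk groups for Kaplan-Meier comparison.
df["risk_score"] = cph.predict_partial_hazard(df)
```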

LMLCC-Net: A Semi-Supervised Deep Learning Model for Lung Nodule Malignancy Prediction from CT Scans using a Novel Hounsfield Unit-Based Intensity Filtering

Adhora Madhuri, Nusaiba Sobir, Tasnia Binte Mamun, Taufiq Hasan

arXiv preprint · May 9, 2025
Lung cancer is the leading cause of cancer-related mortality worldwide. Early diagnosis of malignant pulmonary nodules in CT images can have a significant impact on reducing disease mortality and morbidity. In this work, we propose LMLCC-Net, a novel deep learning framework that classifies nodules from CT scans with a 3D CNN and Hounsfield Unit (HU)-based intensity filtering. Benign and malignant nodules differ significantly in their HU intensity profiles, a property that has not been exploited in the literature. Our method considers the intensity pattern as well as the texture for malignancy prediction. LMLCC-Net extracts features from multiple branches, each of which uses a separate learnable HU-based intensity filtering stage. Various combinations of branches and learnable filter ranges were explored to produce the best-performing model. In addition, we propose a semi-supervised learning scheme for labeling ambiguous cases and develop a lightweight model for nodule classification. The experimental evaluations are carried out on the LUNA16 dataset. Our proposed method achieves a classification accuracy (ACC) of 91.96%, a sensitivity (SEN) of 92.04%, and an area under the curve (AUC) of 91.87%, showing improved performance compared to existing methods. The proposed method can have a significant impact in helping radiologists classify pulmonary nodules and in improving patient care.
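The authors' implementation is not shown here; the snippet below is a minimal sketch of how a learnable HU-based intensity filter branch might look as a differentiable soft window in PyTorch (the parameterization, initial window, and sharpness are assumptions, not LMLCC-Net's actual code).

```python
# Minimal sketch of a learnable Hounsfield-unit (HU) window, assuming a soft
# band-pass parameterized by a learnable center and width (not the authors' code).
import torch
import torch.nn as nn

class LearnableHUFilter(nn.Module):
    """Soft intensity window: voxels inside [center - width/2, center + width/2]
    pass with weight near 1, voxels outside are attenuated toward 0."""
    def __init__(self, center_hu=-300.0, width_hu=700.0, sharpness=0.05):
        super().__init__()
        self.center = nn.Parameter(torch.tensor(center_hu))
        self.width = nn.Parameter(torch.tensor(width_hu))
        self.sharpness = sharpness  # slope of the sigmoid edges

    def forward(self, x_hu):
        low = self.center - self.width / 2
        high = self.center + self.width / 2
        weight = torch.sigmoid(self.sharpness * (x_hu - low)) * \
                 torch.sigmoid(self.sharpness * (high - x_hu))
        return x_hu * weight

# Each branch of a multi-branch 3D CNN could apply its own filter before convolution.
volume = torch.randn(1, 1, 32, 64, 64) * 400.0  # fake HU volume
filtered = LearnableHUFilter()(volume)
```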

Robust & Precise Knowledge Distillation-based Novel Context-Aware Predictor for Disease Detection in Brain and Gastrointestinal

Saif Ur Rehman Khan, Muhammad Nabeel Asim, Sebastian Vollmer, Andreas Dengel

arXiv preprint · May 9, 2025
Medical disease prediction, particularly through imaging, remains a challenging task due to the complexity and variability of medical data, including noise, ambiguity, and differing image quality. Recent deep learning models, including Knowledge Distillation (KD) methods, have shown promising results in brain tumor image identification but still face limitations in handling uncertainty and generalizing across diverse medical conditions. Traditional KD methods often rely on a context-unaware temperature parameter to soften teacher model predictions, which does not adapt effectively to the varying uncertainty levels present in medical images. To address this issue, we propose a novel framework that integrates Ant Colony Optimization (ACO) for optimal teacher-student model selection with a context-aware predictor for temperature scaling. The proposed context-aware framework adjusts the temperature based on factors such as image quality, disease complexity, and teacher model confidence, allowing for more robust knowledge transfer. Additionally, ACO efficiently selects the most appropriate teacher-student model pair from a set of pre-trained models, outperforming current optimization methods by exploring a broader solution space and better handling complex, non-linear relationships within the data. The proposed framework is evaluated using three publicly available benchmark datasets, each corresponding to a distinct medical imaging task. The results demonstrate that the proposed framework significantly outperforms current state-of-the-art methods, achieving top accuracy rates of 98.01% on the MRI brain tumor (Kaggle) dataset, 92.81% on the Figshare MRI dataset, and 96.20% on the GastroNet dataset, surpassing the existing benchmarks of 97.24% (Kaggle), 91.43% (Figshare), and 95.00% (GastroNet).
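The exact context-aware predictor is not specified in the abstract; the sketch below only illustrates the underlying mechanism of adapting the distillation temperature per sample, using teacher entropy as a hypothetical stand-in for the paper's combination of image quality, disease complexity, and teacher confidence.

```python
# Sketch of knowledge distillation with a confidence-dependent temperature.
# Mapping teacher entropy to temperature is a hypothetical stand-in for the
# paper's context-aware predictor.
import torch
import torch.nn.functional as F

def context_aware_kd_loss(student_logits, teacher_logits, t_min=1.0, t_max=6.0):
    # Normalized teacher entropy in [0, 1]: higher entropy = more uncertain teacher.
    probs = F.softmax(teacher_logits, dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1)
    entropy = entropy / torch.log(torch.tensor(float(teacher_logits.size(1))))
    # More uncertain teachers get a higher temperature (softer targets).
    T = (t_min + (t_max - t_min) * entropy).unsqueeze(1)  # per-sample temperature
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    # Standard KD scaling by T^2, averaged over the batch.
    per_sample_kl = F.kl_div(log_student, soft_targets, reduction="none").sum(dim=1)
    return (T.squeeze(1) ** 2 * per_sample_kl).mean()

loss = context_aware_kd_loss(torch.randn(8, 4), torch.randn(8, 4))
```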

APD-FFNet: A Novel Explainable Deep Feature Fusion Network for Automated Periodontitis Diagnosis on Dental Panoramic Radiography.

Resul ES, Senirkentli GB, Bostanci E, Oduncuoglu BF

PubMed · May 9, 2025
This study introduces APD-FFNet, a novel, explainable deep learning architecture for automated periodontitis diagnosis using panoramic radiographs. A total of 337 panoramic radiographs, annotated by a periodontist, served as the dataset. APD-FFNet combines custom convolutional and transformer-based layers within a deep feature fusion framework that captures both local and global contextual features. Performance was evaluated using accuracy, the F1 score, the area under the receiver operating characteristic curve, the Jaccard similarity coefficient, and the Matthews correlation coefficient. McNemar's test confirmed statistical significance, and SHapley Additive exPlanations provided interpretability insights. APD-FFNet achieved 94% accuracy, a 93.88% F1 score, 93.47% area under the receiver operating characteristic curve, 88.47% Jaccard similarity coefficient, and 88.46% Matthews correlation coefficient, surpassing comparable approaches. McNemar's test validated these findings (p < 0.05). Explanations generated by SHapley Additive exPlanations highlighted important regions in each radiograph, supporting clinical applicability. By merging convolutional and transformer-based layers, APD-FFNet establishes a new benchmark in automated, interpretable periodontitis diagnosis, with low hyperparameter sensitivity facilitating its integration into regular dental practice. Its adaptable design suggests broader relevance to other medical imaging domains. This is the first feature fusion method specifically devised for periodontitis diagnosis, supported by an expert-curated dataset and advanced explainable artificial intelligence. Its robust accuracy, low hyperparameter sensitivity, and transparent outputs set a new standard for automated periodontal analysis.
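APD-FFNet's exact layers are not reproduced here; the snippet is a generic sketch of the deep-feature-fusion idea described above, concatenating a convolutional feature vector with a transformer feature vector before a small classifier (all dimensions are illustrative assumptions).

```python
# Generic deep feature fusion head (illustrative only, not APD-FFNet itself):
# local CNN features and global transformer features are concatenated and
# passed to a small classifier.
import torch
import torch.nn as nn

class FusionHead(nn.Module):
    def __init__(self, cnn_dim=512, transformer_dim=384, num_classes=2):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(cnn_dim + transformer_dim, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.3),
            nn.Linear(256, num_classes),
        )

    def forward(self, cnn_feat, transformer_feat):
        fused = torch.cat([cnn_feat, transformer_feat], dim=1)
        return self.classifier(fused)

logits = FusionHead()(torch.randn(4, 512), torch.randn(4, 384))
```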

KEVS: enhancing segmentation of visceral adipose tissue in pre-cystectomy CT with Gaussian kernel density estimation.

Boucher T, Tetlow N, Fung A, Dewar A, Arina P, Kerneis S, Whittle J, Mazomenos EB

PubMed · May 9, 2025
The distribution of visceral adipose tissue (VAT) in cystectomy patients is indicative of the incidence of postoperative complications. Existing VAT segmentation methods for computed tomography (CT) employing intensity thresholding have limitations relating to inter-observer variability. Moreover, the difficulty in creating ground-truth masks limits the development of deep learning (DL) models for this task. This paper introduces a novel method for VAT prediction in pre-cystectomy CT, which is fully automated and does not require ground-truth VAT masks for training, overcoming the aforementioned limitations. We introduce the kernel density-enhanced VAT segmentator (KEVS), which combines a DL semantic segmentation model for multi-body feature prediction with Gaussian kernel density estimation analysis of the predicted subcutaneous adipose tissue to achieve accurate scan-specific predictions of VAT in the abdominal cavity. Uniquely for a DL pipeline, KEVS does not require ground-truth VAT masks. We verify the ability of KEVS to accurately segment abdominal organs in unseen CT data and compare KEVS VAT segmentation predictions to existing state-of-the-art (SOTA) approaches in a dataset of 20 pre-cystectomy CT scans with expert ground-truth annotations, collected from University College London Hospital (UCLH-Cyst). KEVS presents 4.80% and 6.02% improvements in Dice coefficient over the second-best DL and thresholding-based VAT segmentation techniques, respectively, when evaluated on UCLH-Cyst. This research introduces KEVS, an automated, SOTA method for the prediction of VAT in pre-cystectomy CT which eliminates inter-observer variability and is trained entirely on open-source CT datasets that do not contain ground-truth VAT masks.
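KEVS itself is not reproduced here; the sketch below only illustrates the general mechanism of deriving a scan-specific fat intensity window from a Gaussian KDE of subcutaneous-fat HU values (the relative-density cut-off and toy data are assumptions, not the authors' settings).

```python
# Sketch of the core scan-specific idea (not the authors' implementation): fit a
# Gaussian KDE to HU values inside the predicted subcutaneous adipose tissue
# (SAT) mask, then keep abdominal-cavity voxels whose HU lies where that density
# is high. The 5% relative-density cut-off is an assumption.
import numpy as np
from scipy.stats import gaussian_kde

def vat_mask_from_sat_kde(ct_hu, sat_mask, cavity_mask, rel_density=0.05):
    sat_values = ct_hu[sat_mask]                          # HU samples from SAT
    kde = gaussian_kde(sat_values)
    grid = np.linspace(sat_values.min() - 50, sat_values.max() + 50, 1000)
    density = kde(grid)
    keep = grid[density >= rel_density * density.max()]   # scan-specific fat window
    lo, hi = keep.min(), keep.max()
    return cavity_mask & (ct_hu >= lo) & (ct_hu <= hi)

# Toy example with random data standing in for a CT slice and two masks.
ct = np.random.normal(-90.0, 30.0, size=(64, 64))
sat = np.zeros_like(ct, dtype=bool); sat[:, :10] = True
cavity = np.zeros_like(ct, dtype=bool); cavity[16:48, 16:48] = True
vat = vat_mask_from_sat_kde(ct, sat, cavity)
```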

Shortcut learning leads to sex bias in deep learning models for photoacoustic tomography.

Knopp M, Bender CJ, Holzwarth N, Li Y, Kempf J, Caranovic M, Knieling F, Lang W, Rother U, Seitel A, Maier-Hein L, Dreher KK

PubMed · May 9, 2025
Shortcut learning has been identified as a source of algorithmic unfairness in medical imaging artificial intelligence (AI), but its impact on photoacoustic tomography (PAT), particularly concerning sex bias, remains underexplored. This study investigates this issue using peripheral artery disease (PAD) diagnosis as a specific clinical application. To examine the potential for sex bias due to shortcut learning in convolutional neural networks (CNNs) and assess how such biases might affect diagnostic predictions, we created training and test datasets with varying PAD prevalence between sexes. Using these datasets, we explored (1) whether CNNs can classify sex from imaging data, (2) how sex-specific prevalence shifts impact PAD diagnosis performance and the underdiagnosis disparity between sexes, and (3) how similarly CNNs encode sex and PAD features. Our study with 147 individuals demonstrates that CNNs can classify sex from calf muscle PAT images, achieving an AUROC of 0.75. For PAD diagnosis, models trained on data with imbalanced sex-specific disease prevalence experienced significant performance drops (up to 0.21 AUROC) when applied to balanced test sets. Additionally, greater imbalances in sex-specific prevalence within the training data exacerbated underdiagnosis disparities between sexes. Finally, we identify evidence of shortcut learning by demonstrating the effective reuse of learned feature representations between the PAD diagnosis and sex classification tasks. CNN-based models trained on PAT data may engage in shortcut learning by leveraging sex-related features, leading to biased and unreliable diagnostic predictions. Addressing demographic-specific prevalence imbalances and preventing shortcut learning is critical for developing models in the medical field that are both accurate and equitable across diverse patient populations.
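The study's probing protocol is not given in detail here; a common way to test whether a disease classifier's representations also encode sex is a linear probe evaluated with AUROC, sketched below on placeholder embeddings (the embedding dimension and data are simulated, not the study's pipeline).

```python
# Sketch of a linear probe for shortcut learning (placeholder data): train a
# logistic regression on embeddings taken from a PAD classifier and test whether
# it can predict sex better than chance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(147, 128))   # features from the PAD model (simulated)
sex = rng.integers(0, 2, size=147)         # 0 = female, 1 = male (simulated)

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, sex, test_size=0.3,
                                          random_state=0, stratify=sex)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auroc = roc_auc_score(y_te, probe.predict_proba(X_te)[:, 1])
print(f"Sex-probe AUROC on PAD features: {auroc:.2f}")  # ~0.5 means no sex information
```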

Application of Artificial Intelligence in Cardio-Oncology Imaging for Cancer Therapy-Related Cardiovascular Toxicity: Systematic Review.

Mushcab H, Al Ramis M, AlRujaib A, Eskandarani R, Sunbul T, AlOtaibi A, Obaidan M, Al Harbi R, Aljabri D

PubMed · May 9, 2025
Artificial intelligence (AI) is a revolutionary tool that has yet to be fully integrated into several health care sectors, including medical imaging. AI can transform how medical imaging is conducted and interpreted, especially in cardio-oncology. This study aims to systematically review the available literature on the use of AI in cardio-oncology imaging to predict cardiotoxicity and describe the possible improvement of different imaging modalities that can be achieved if AI is successfully deployed to routine practice. We conducted a database search in PubMed, Ovid MEDLINE, Cochrane Library, CINAHL, and Google Scholar from inception to 2023 using the AI research assistant tool (Elicit) to search for original studies reporting AI outcomes in adult patients diagnosed with any cancer and undergoing cardiotoxicity assessment. Outcomes included incidence of cardiotoxicity, left ventricular ejection fraction, risk factors associated with cardiotoxicity, heart failure, myocardial dysfunction, signs of cancer therapy-related cardiovascular toxicity, echocardiography, and cardiac magnetic resonance imaging. Descriptive information about each study was recorded, including imaging technique, AI model, outcomes, and limitations. The systematic search resulted in 7 studies conducted between 2018 and 2023, which are included in this review. Most of these studies were conducted in the United States (71%), included patients with breast cancer (86%), and used magnetic resonance imaging as the imaging modality (57%). Quality assessment of the included studies showed an average of 86% compliance across all sections of the appraisal tool. In conclusion, this systematic review demonstrates the potential of AI to enhance cardio-oncology imaging for predicting cardiotoxicity in patients with cancer. Our findings suggest that AI can enhance the accuracy and efficiency of cardiotoxicity assessments. However, further research through larger, multicenter trials is needed to validate these applications and refine AI technologies for routine use, paving the way for improved patient outcomes in cancer survivors at risk of cardiotoxicity.

Predicting Knee Osteoarthritis Severity from Radiographic Predictors: Data from the Osteoarthritis Initiative.

Nurmirinta TAT, Turunen MJ, Tohka J, Mononen ME, Liukkonen MK

PubMed · May 9, 2025
In knee osteoarthritis (KOA) treatment, preventive measures to reduce its onset risk are a key factor. Among individuals with radiographically healthy knees, however, future knee joint integrity and condition cannot be predicted by clinically applicable methods. We investigated whether knee joint morphology derived from widely accessible and cost-effective radiographs could help predict future knee joint integrity and condition. We combined knee joint morphology with known risk predictors such as age, height, and weight. Baseline data were utilized as predictors, and the maximal severity of KOA after 8 years served as the target variable. The three KOA categories in this study were based on Kellgren-Lawrence grading: healthy, moderate, and severe. We employed a two-stage machine learning model that utilized two random forest algorithms. We trained three models: the subject demographics (SD) model utilized only SD; the image model utilized only knee joint morphology from radiographs; the merged model utilized the combined predictors. The training data comprised an 8-year follow-up of 1222 knees from 683 individuals. The SD model obtained a weighted F1 score (WF1) of 77.2% and a balanced accuracy (BA) of 65.6%. The image model's performance metrics were the lowest, with a WF1 of 76.5% and a BA of 63.8%. The top-performing merged model achieved a WF1 score of 78.3% and a BA of 68.2%. Our two-stage prediction model provided improved results based on these performance metrics, suggesting potential for application in clinical settings.
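The authors' exact two-stage configuration is not reproduced here; the sketch below shows one plausible reading of a two-stage random-forest pipeline (stage 1 separates healthy from OA knees, stage 2 grades OA knees as moderate or severe) on simulated features. The staging, feature set, and split are assumptions.

```python
# Sketch of a two-stage random forest pipeline (feature values are simulated; the
# staging is an assumed reading of the two-stage design described in the abstract).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score, balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1222, 12))            # demographics + radiographic morphology
y = rng.choice([0, 1, 2], size=1222, p=[0.6, 0.3, 0.1])  # 0 healthy, 1 moderate, 2 severe

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0, stratify=y)

# Stage 1: healthy vs. any OA.  Stage 2: moderate vs. severe among OA knees.
stage1 = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr > 0)
oa_rows = y_tr > 0
stage2 = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr[oa_rows], y_tr[oa_rows])

pred = np.zeros_like(y_te)
is_oa = stage1.predict(X_te)
pred[is_oa] = stage2.predict(X_te[is_oa])

print("weighted F1:", f1_score(y_te, pred, average="weighted"))
print("balanced accuracy:", balanced_accuracy_score(y_te, pred))
```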

Towards Better Cephalometric Landmark Detection with Diffusion Data Generation

Dongqian Guo, Wencheng Han, Pang Lyu, Yuxi Zhou, Jianbing Shen

arXiv preprint · May 9, 2025
Cephalometric landmark detection is essential for orthodontic diagnostics and treatment planning. Nevertheless, the scarcity of samples in data collection and the extensive effort required for manual annotation have significantly impeded the availability of diverse datasets. This limitation has restricted the effectiveness of deep learning-based detection methods, particularly those based on large-scale vision models. To address these challenges, we have developed an innovative data generation method capable of producing diverse cephalometric X-ray images along with corresponding annotations without human intervention. To achieve this, our approach initiates by constructing new cephalometric landmark annotations using anatomical priors. Then, we employ a diffusion-based generator to create realistic X-ray images that correspond closely with these annotations. To achieve precise control in producing samples with different attributes, we introduce a novel prompt cephalometric X-ray image dataset. This dataset includes real cephalometric X-ray images and detailed medical text prompts describing the images. By leveraging these detailed prompts, our method improves the generation process to control different styles and attributes. Facilitated by the large, diverse generated data, we introduce large-scale vision detection models into the cephalometric landmark detection task to improve accuracy. Experimental results demonstrate that training with the generated data substantially enhances the performance. Compared to methods without using the generated data, our approach improves the Success Detection Rate (SDR) by 6.5%, attaining a notable 82.2%. All code and data are available at: https://um-lab.github.io/cepha-generation
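For reference, the Success Detection Rate (SDR) quoted above is simply the fraction of predicted landmarks falling within a radial tolerance of the ground truth; a minimal computation is sketched below (the 2 mm threshold is the conventional cephalometric choice and an assumption here).

```python
# Minimal Success Detection Rate (SDR) computation: the share of predicted
# landmarks within a given radius of the ground truth.
import numpy as np

def success_detection_rate(pred_mm, gt_mm, threshold_mm=2.0):
    distances = np.linalg.norm(pred_mm - gt_mm, axis=-1)  # per-landmark error in mm
    return float((distances <= threshold_mm).mean())

pred = np.array([[10.2, 34.1], [55.0, 12.3], [80.4, 61.0]])
gt = np.array([[10.0, 34.0], [57.5, 12.0], [80.0, 60.5]])
print(success_detection_rate(pred, gt))  # 2 of 3 landmarks within 2 mm -> 0.667
```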

Hybrid Learning: A Novel Combination of Self-Supervised and Supervised Learning for MRI Reconstruction without High-Quality Training Reference

Haoyang Pei, Ding Xia, Xiang Xu, William Moore, Yao Wang, Hersh Chandarana, Li Feng

arXiv preprint · May 9, 2025
Purpose: Deep learning has demonstrated strong potential for MRI reconstruction, but conventional supervised learning methods require high-quality reference images, which are often unavailable in practice. Self-supervised learning offers an alternative, yet its performance degrades at high acceleration rates. To overcome these limitations, we propose hybrid learning, a novel two-stage training framework that combines self-supervised and supervised learning for robust image reconstruction. Methods: Hybrid learning is implemented in two sequential stages. In the first stage, self-supervised learning is employed to generate improved images from noisy or undersampled reference data. These enhanced images then serve as pseudo-ground truths for the second stage, which uses supervised learning to refine reconstruction performance and support higher acceleration rates. We evaluated hybrid learning in two representative applications: (1) accelerated 0.55T spiral-UTE lung MRI using noisy reference data, and (2) 3D T1 mapping of the brain without access to fully sampled ground truth. Results: For spiral-UTE lung MRI, hybrid learning consistently improved image quality over both self-supervised and conventional supervised methods across different acceleration rates, as measured by SSIM and NMSE. For 3D T1 mapping, hybrid learning achieved superior T1 quantification accuracy across a wide dynamic range, outperforming self-supervised learning in all tested conditions. Conclusions: Hybrid learning provides a practical and effective solution for training deep MRI reconstruction networks when only low-quality or incomplete reference data are available. It enables improved image quality and accurate quantitative mapping across different applications and field strengths, representing a promising technique toward broader clinical deployment of deep learning-based MRI.
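Hybrid learning is a training schedule rather than a single loss; the schematic sketch below illustrates the two-stage flow with a toy network and a masked-consistency objective standing in for stage 1 (the model, masking scheme, and data are placeholders, not the authors' implementation).

```python
# Schematic two-stage "hybrid learning" loop (placeholders throughout; the
# self-supervised objective here is a simple masked-consistency loss standing
# in for whatever stage-1 objective is actually used).
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 1, 3, padding=1))

noisy_refs = [torch.randn(1, 1, 64, 64) for _ in range(8)]  # low-quality references

# Stage 1: self-supervised -- hide part of each reference and predict it from the rest.
net1 = make_net()
opt1 = torch.optim.Adam(net1.parameters(), lr=1e-3)
for epoch in range(2):
    for ref in noisy_refs:
        mask = (torch.rand_like(ref) > 0.3).float()
        loss = ((net1(ref * mask) - ref) ** 2 * (1 - mask)).mean()  # loss on hidden pixels
        opt1.zero_grad(); loss.backward(); opt1.step()

# Stage 2: supervised -- stage-1 outputs become pseudo-ground truths for a second
# network trained on more heavily degraded inputs (a proxy for higher acceleration).
net2 = make_net()
opt2 = torch.optim.Adam(net2.parameters(), lr=1e-3)
with torch.no_grad():
    pseudo_gt = [net1(ref) for ref in noisy_refs]
for epoch in range(2):
    for ref, target in zip(noisy_refs, pseudo_gt):
        degraded = ref * (torch.rand_like(ref) > 0.6).float()
        loss = ((net2(degraded) - target) ** 2).mean()
        opt2.zero_grad(); loss.backward(); opt2.step()
```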