
An efficient deep learning based approach for automated identification of cervical vertebrae fracture as a clinical support aid.

Singh M, Tripathi U, Patel KK, Mohit K, Pathak S

PubMed · Jul 15 2025
Cervical vertebrae fractures pose a significant risk to a patient's health, and accurate diagnosis with prompt intervention is essential for effective treatment. Automated analysis of cervical vertebrae fractures is therefore of utmost importance, and deep learning models have been widely used and play a significant role in identification and classification. In this paper, we propose a novel hybrid transfer learning approach for identifying and classifying fractures in axial CT scan slices of the cervical spine. We utilize the publicly available RSNA (Radiological Society of North America) dataset of annotated cervical vertebrae fractures for our experiments. The CT scan slices undergo preprocessing and analysis to extract features, employing four distinct pre-trained transfer learning models to detect abnormalities in the cervical vertebrae. The top-performing model, Inception-ResNet-v2, is combined with the upsampling component of U-Net to form a hybrid architecture. The hybrid model demonstrates superior performance over traditional deep learning models, achieving an overall accuracy of 98.44% on 2,984 test CT scan slices, a 3.62% relative improvement over the 95% accuracy of predictions made by radiologists. This study advances clinical decision support systems, equipping medical professionals with a powerful tool for timely intervention and accurate diagnosis of cervical vertebrae fractures, thereby enhancing patient outcomes and healthcare efficiency.
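
As a quick sanity check on the headline numbers: the 3.62% figure is the improvement relative to the radiologists' 95% baseline, not the absolute difference. A minimal sketch using only values quoted in the abstract:

```python
# Accuracy figures quoted in the abstract.
model_acc = 0.9844        # hybrid Inception-ResNet-v2 + U-Net upsampling
radiologist_acc = 0.95    # reported radiologist accuracy

absolute_gain = model_acc - radiologist_acc        # 3.44 percentage points
relative_gain = absolute_gain / radiologist_acc    # ~3.62 % relative gain

print(f"absolute: {absolute_gain:.4f}, relative: {relative_gain:.4f}")
```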

Learning quality-guided multi-layer features for classifying visual types with ball sports application.

Huang X, Liu T, Yu Y

PubMed · Jul 15 2025
Nowadays, breast cancer is one of the leading causes of death among women. This highlights the need for precise X-ray image analysis in the medical and imaging fields. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets, mimicking human visual perception. We begin by using a large dataset of breast cancer images and apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique in the weak annotation context. This technique identifies the patches that are most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We leverage these features to train a multi-class SVM classifier, which categorizes the images into various breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
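
The rank-then-aggregate step described above can be sketched as follows; the function name, the mean-pooling choice, and the toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def aggregate_top_patches(patch_feats, patch_scores, k=5):
    """Keep the k highest-scoring patches and mean-pool their features.

    patch_feats  : (n_patches, d) feature matrix for one image
    patch_scores : (n_patches,) objectness/quality scores (e.g. BING-style)
    """
    order = np.argsort(patch_scores)[::-1][:k]   # indices of top-k patches
    return patch_feats[order].mean(axis=0)       # (d,) image-level descriptor

rng = np.random.default_rng(0)
feats = rng.normal(size=(40, 16))   # 40 candidate patches, 16-D features
scores = rng.random(40)
desc = aggregate_top_patches(feats, scores, k=5)
print(desc.shape)  # (16,)
```

The resulting per-image descriptor is what would feed a multi-class SVM in the final stage.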

Performance of a screening-trained DL model for pulmonary nodule malignancy estimation of incidental clinical nodules.

Dinnessen R, Peeters D, Antonissen N, Mohamed Hoesein FAA, Gietema HA, Scholten ET, Schaefer-Prokop C, Jacobs C

PubMed · Jul 15 2025
To test the performance of a DL model developed and validated for screen-detected pulmonary nodules on incidental nodules detected in a clinical setting. A retrospective dataset of incidental pulmonary nodules sized 5-15 mm was collected, and a subset of size-matched solid nodules was selected. The performance of the DL model was compared to the Brock model. AUCs with 95% CIs were compared using the DeLong method. Sensitivity and specificity were determined at various thresholds, using a 10% threshold for the Brock model as reference. The model's calibration was visually assessed. The dataset included 49 malignant and 359 benign solid or part-solid nodules, and the size-matched dataset included 47 malignant and 47 benign solid nodules. In the complete dataset, AUCs [95% CI] were 0.89 [0.85, 0.93] for the DL model and 0.86 [0.81, 0.92] for the Brock model (p = 0.27). In the size-matched subset, AUCs of the DL and Brock models were 0.78 [0.69, 0.88] and 0.58 [0.46, 0.69] (p < 0.01), respectively. At a 10% threshold, the Brock model had a sensitivity of 0.49 [0.35, 0.63] and a specificity of 0.92 [0.89, 0.94]. At a threshold of 17%, the DL model matched the specificity of the Brock model at the 10% threshold, but had a higher sensitivity (0.57 [0.43, 0.71]). Calibration analysis revealed that the DL model overestimated the malignancy probability. The DL model demonstrated good discriminatory performance in a dataset of incidental nodules and outperformed the Brock model, but may need recalibration for clinical practice.
Question What is the performance of a DL model for pulmonary nodule malignancy risk estimation developed on screening data in a dataset of incidentally detected nodules?
Findings The DL model performed well on a dataset of nodules from clinical routine care and outperformed the Brock model in a size-matched subset.
Clinical relevance This study provides further evidence about the potential of DL models for risk stratification of incidental nodules, which may improve nodule management in routine clinical practice.
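
The threshold-matching comparison above (a 17% DL cut-off chosen to match the Brock model's specificity at 10%) can be illustrated with synthetic scores; everything below is a sketch, not study data:

```python
import numpy as np

def sens_spec(scores, labels, thr):
    """Sensitivity and specificity at a given probability threshold."""
    pred = scores >= thr
    return pred[labels == 1].mean(), (~pred[labels == 0]).mean()

def threshold_matching_specificity(scores, labels, target_spec):
    """Smallest score threshold whose specificity reaches the target,
    mirroring how a DL cut-off can be matched to a reference model's
    operating point before comparing sensitivities."""
    for thr in np.sort(np.unique(scores)):
        if sens_spec(scores, labels, thr)[1] >= target_spec:
            return thr
    return 1.0

# Synthetic malignancy scores (illustration only, not study data).
labels = np.array([0, 0, 0, 0, 1, 1])
scores = np.array([0.10, 0.20, 0.30, 0.90, 0.80, 0.95])
thr = threshold_matching_specificity(scores, labels, target_spec=0.75)
print(thr, sens_spec(scores, labels, thr))
```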

Enhancing breast positioning quality through real-time AI feedback.

Sexauer R, Riehle F, Borkowski K, Ruppert C, Potthast S, Schmidt N

PubMed · Jul 15 2025
To enhance mammography quality and increase cancer detection by implementing continuous AI-driven feedback mechanisms, ensuring reliable, consistent, and high-quality screening under the 'Perfect', 'Good', 'Moderate', and 'Inadequate' (PGMI) criteria. To assess the impact of the AI software 'b-box<sup>TM</sup>' on mammography quality, we conducted a comparative analysis of PGMI scores. We evaluated scores 50 days before (A) and after the software's implementation in 2021 (B), along with assessments made in the first week of August 2022 (C1) and 2023 (C2), comparing them to evaluations conducted by two readers. Except for postsurgical patients, we included all diagnostic and screening mammograms from one tertiary hospital. A total of 4577 mammograms from 1220 women (mean age: 59, range: 21-94, standard deviation: 11.18) were included: 1728 images obtained before (A) and 2330 images after the 2021 software implementation (B), along with 269 images in 2022 (C1) and 250 images in 2023 (C2). The results indicated a significant improvement in diagnostic image quality (p < 0.01). The percentage of 'Perfect' examinations rose from 22.34% to 32.27%, while 'Inadequate' images decreased from 13.31% to 5.41% in 2021, continuing the positive trend with 4.46% and 3.20% 'Inadequate' images in 2022 and 2023, respectively (p < 0.01). Using a reliable software platform to perform AI-driven quality evaluation in real time has the potential to make lasting improvements in image quality, support radiographers' professional growth, and elevate institutional quality standards and documentation simultaneously.
Question How can AI-powered quality assessment reduce inadequate mammographic quality, which is known to impact sensitivity and increase the risk of interval cancers?
Findings AI implementation decreased 'Inadequate' mammograms from 13.31% to 3.20% and substantially improved parenchyma visualization, with consistent subgroup trends.
Clinical relevance By reducing 'inadequate' mammograms and enhancing imaging quality, AI-driven tools improve diagnostic reliability and support better outcomes in breast cancer screening.
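
A two-proportion z-test is one simple way to check a before/after change in the 'Perfect' rate. The counts below are reconstructed from the quoted percentages and image totals, which is an assumption, and the choice of test is ours rather than the study's:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 'Perfect' rate before vs. after implementation; counts approximated
# from the percentages (22.34% of 1728, 32.27% of 2330) in the abstract.
before = round(0.2234 * 1728)
after = round(0.3227 * 2330)
z = two_proportion_z(before, 1728, after, 2330)
print(f"z = {z:.2f}")  # well beyond 1.96, consistent with p < 0.01
```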

Deep Learning for Osteoporosis Diagnosis Using Magnetic Resonance Images of Lumbar Vertebrae.

Mousavinasab SM, Hedyehzadeh M, Mousavinasab ST

PubMed · Jul 15 2025
This work uses T1, STIR, and T2 MRI sequences of the lumbar vertebrae and BMD measurements to identify osteoporosis using deep learning. An analysis of 1350 MRI images from 50 individuals who underwent simultaneous BMD and MRI scans was performed, and the accuracy of a custom convolutional neural network for osteoporosis categorization was assessed. T2-weighted MRI proved the most diagnostic: the proposed model achieved 88.5% accuracy, 88.9% sensitivity, and a 76.1% F1-score on T2, outperforming the T1 and STIR sequences. Its performance was compared with modern deep learning models, including GoogleNet, EfficientNet-B3, ResNet50, InceptionV3, and InceptionResNetV2; these architectures performed well, but our model was more sensitive and accurate. This research shows that T2-weighted MRI is the best sequence for osteoporosis diagnosis and that an MRI-based deep learning approach can compete with BMD-based approaches while avoiding their ionizing radiation. These results support the clinical use of deep learning with MRI for safe, accurate, and quick osteoporosis diagnosis.
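
The reported metrics all follow directly from confusion-matrix counts; a minimal helper with toy counts (the counts are invented for illustration, not taken from the study):

```python
def clf_metrics(tp, fp, fn, tn):
    """Accuracy, sensitivity (recall) and F1 from confusion-matrix counts."""
    acc = (tp + tn) / (tp + fp + fn + tn)
    sens = tp / (tp + fn)
    prec = tp / (tp + fp)
    f1 = 2 * prec * sens / (prec + sens)
    return acc, sens, f1

# Toy counts; the abstract reports 88.5% accuracy, 88.9% sensitivity
# and a 76.1% F1-score for the T2-weighted sequence.
acc, sens, f1 = clf_metrics(tp=80, fp=30, fn=10, tn=100)
print(f"acc={acc:.3f} sens={sens:.3f} f1={f1:.3f}")
```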

Preoperative prediction value of 2.5D deep learning model based on contrast-enhanced CT for lymphovascular invasion of gastric cancer.

Sun X, Wang P, Ding R, Ma L, Zhang H, Zhu L

PubMed · Jul 15 2025
To develop and validate artificial intelligence models based on contrast-enhanced CT (CECT) venous-phase images, using deep learning (DL) and radiomics approaches, to predict lymphovascular invasion in gastric cancer prior to surgery. We retrospectively analyzed data from 351 gastric cancer patients, randomly splitting them into two cohorts (training cohort, n = 246; testing cohort, n = 105) in a 7:3 ratio. The tumor region of interest (ROI) was outlined on venous-phase CT images as the input for the development of radiomics, 2D, and 3D DL models (DL2D and DL3D). Of note, by centering the analysis on the tumor's maximum cross-section and incorporating seven adjacent 2D images, we generated stable 2.5D data to establish a multi-instance learning (MIL) model. Meanwhile, clinical and feature-combined models, which integrated traditional CT enhancement parameters (Ratio), radiomics, and MIL features, were also constructed. Model performance was evaluated by the area under the curve (AUC), confusion matrices, and detailed metrics such as sensitivity and specificity. A nomogram based on the combined model was established and applied to clinical practice. The calibration curve was used to evaluate the consistency between each model's predicted LVI and the actual LVI of gastric cancer, and decision curve analysis (DCA) was used to evaluate each model's net benefit. Among the developed models, the 2.5D MIL and combined models exhibited superior performance compared to the clinical model, the radiomics model, the DL2D model, and the DL3D model, as evidenced by AUC values of 0.820, 0.822, 0.748, 0.725, 0.786, and 0.711 on the testing set, respectively. Additionally, the 2.5D MIL and combined models also showed good calibration for LVI prediction and could provide a net clinical benefit when the threshold probability ranged from 0.31 to 0.98 and from 0.28 to 0.84, respectively, indicating their clinical usefulness.
The 2.5D MIL and combined models demonstrate strong performance in predicting preoperative lymphovascular invasion in gastric cancer, offering clinicians valuable support in selecting appropriate treatment options for gastric cancer patients.
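
Building a 2.5D input from the maximum-cross-section slice and its neighbours can be sketched as below; the exact window (here three slices on each side) and the border-clipping policy are assumptions, not the paper's stated configuration:

```python
import numpy as np

def build_25d_stack(volume, center_idx, half_width=3):
    """Take the slice with the largest tumour cross-section plus its
    neighbours to form a 2.5D input; indices are clipped at the volume
    borders so the stack always has 2*half_width + 1 slices.

    volume : (n_slices, H, W) CT array.
    """
    idx = np.clip(
        np.arange(center_idx - half_width, center_idx + half_width + 1),
        0, volume.shape[0] - 1,
    )
    return volume[idx]  # (2*half_width + 1, H, W)

vol = np.zeros((40, 64, 64))
stack = build_25d_stack(vol, center_idx=20)
print(stack.shape)  # (7, 64, 64)
```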

Interpretable Prediction of Lymph Node Metastasis in Rectal Cancer MRI Using Variational Autoencoders

Benjamin Keel, Aaron Quyn, David Jayne, Maryam Mohsin, Samuel D. Relton

arXiv preprint · Jul 15 2025
Effective treatment for rectal cancer relies on accurate lymph node metastasis (LNM) staging. However, radiological criteria based on lymph node (LN) size, shape and texture morphology have limited diagnostic accuracy. In this work, we investigate applying a Variational Autoencoder (VAE) as a feature encoder model to replace the large pre-trained Convolutional Neural Network (CNN) used in existing approaches. The motivation for using a VAE is that the generative model aims to reconstruct the images, so it directly encodes visual features and meaningful patterns across the data. This leads to a disentangled and structured latent space which can be more interpretable than a CNN. Models are deployed on an in-house MRI dataset with 168 patients who did not undergo neo-adjuvant treatment. The post-operative pathological N stage was used as the ground truth to evaluate model predictions. Our proposed model 'VAE-MLP' achieved state-of-the-art performance on the MRI dataset, with cross-validated metrics of AUC 0.86 +/- 0.05, Sensitivity 0.79 +/- 0.06, and Specificity 0.85 +/- 0.05. Code is available at: https://github.com/benkeel/Lymph_Node_Classification_MIUA.
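
The VAE's latent features come from the reparameterization trick, which keeps sampling differentiable; a NumPy sketch (the 32-D latent size and batch are illustrative, not the paper's configuration):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    The latent z (not raw pixels) is what feeds a downstream MLP classifier."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

mu = np.zeros((4, 32))        # batch of 4 lymph-node images, 32-D latent
log_var = np.zeros((4, 32))   # log variance of 0 -> unit variance
z = reparameterize(mu, log_var)
print(z.shape)  # (4, 32)
```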

LRMR: LLM-Driven Relational Multi-node Ranking for Lymph Node Metastasis Assessment in Rectal Cancer

Yaoxian Dong, Yifan Gao, Haoyue Li, Yanfen Cui, Xin Gao

arXiv preprint · Jul 15 2025
Accurate preoperative assessment of lymph node (LN) metastasis in rectal cancer guides treatment decisions, yet conventional MRI evaluation based on morphological criteria shows limited diagnostic performance. While some artificial intelligence models have been developed, they often operate as black boxes, lacking the interpretability needed for clinical trust. Moreover, these models typically evaluate nodes in isolation, overlooking the patient-level context. To address these limitations, we introduce LRMR, an LLM-Driven Relational Multi-node Ranking framework. This approach reframes the diagnostic task from a direct classification problem into a structured reasoning and ranking process. The LRMR framework operates in two stages. First, a multimodal large language model (LLM) analyzes a composite montage image of all LNs from a patient, generating a structured report that details ten distinct radiological features. Second, a text-based LLM performs pairwise comparisons of these reports between different patients, establishing a relative risk ranking based on the severity and number of adverse features. We evaluated our method on a retrospective cohort of 117 rectal cancer patients. LRMR achieved an area under the curve (AUC) of 0.7917 and an F1-score of 0.7200, outperforming a range of deep learning baselines, including ResNet50 (AUC 0.7708). Ablation studies confirmed the value of our two main contributions: removing the relational ranking stage or the structured prompting stage led to a significant performance drop, with AUCs falling to 0.6875 and 0.6458, respectively. Our work demonstrates that decoupling visual perception from cognitive reasoning through a two-stage LLM framework offers a powerful, interpretable, and effective new paradigm for assessing lymph node metastasis in rectal cancer.
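
The second-stage pairwise comparisons can be turned into a patient-level risk ordering by something as simple as win counting; this is a stand-in illustration, not the LRMR scoring rule:

```python
def rank_by_pairwise_wins(ids, beats):
    """Turn pairwise 'higher risk' judgements into a ranking by win count.

    beats : set of (a, b) pairs meaning 'a was judged higher risk than b'
            (e.g. the outcome of LLM report-vs-report comparisons).
    """
    wins = {i: 0 for i in ids}
    for a, _b in beats:
        wins[a] += 1
    return sorted(ids, key=lambda i: wins[i], reverse=True)

order = rank_by_pairwise_wins(
    ["p1", "p2", "p3"],
    {("p2", "p1"), ("p2", "p3"), ("p3", "p1")},
)
print(order)  # ['p2', 'p3', 'p1']
```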

Assessing MRI-based Artificial Intelligence Models for Preoperative Prediction of Microvascular Invasion in Hepatocellular Carcinoma: A Systematic Review and Meta-analysis.

Han X, Shan L, Xu R, Zhou J, Lu M

PubMed · Jul 15 2025
To evaluate the performance of magnetic resonance imaging (MRI)-based artificial intelligence (AI) in the preoperative prediction of microvascular invasion (MVI) in patients with hepatocellular carcinoma (HCC). A systematic search of PubMed, Embase, and Web of Science was conducted up to May 2025, following PRISMA guidelines. Studies using MRI-based AI models with histopathologically confirmed MVI were included. Study quality was assessed using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework. Statistical synthesis used bivariate random-effects models. Twenty-nine studies were included, totaling 2838 internal and 1161 external validation cases. Pooled internal validation showed a sensitivity of 0.81 (95% CI: 0.76-0.85), specificity of 0.82 (95% CI: 0.78-0.85), diagnostic odds ratio (DOR) of 19.33 (95% CI: 13.15-28.42), and area under the curve (AUC) of 0.88 (95% CI: 0.85-0.91). External validation yielded a comparable AUC of 0.85. Traditional machine learning methods achieved higher sensitivity than deep learning approaches in both internal and external validation cohorts (both P < 0.05). Studies incorporating both radiomics and clinical features demonstrated superior sensitivity and specificity compared to radiomics-only models (P < 0.01). MRI-based AI demonstrates high performance for preoperative prediction of MVI in HCC, particularly for MRI-based models that combine multimodal imaging and clinical variables. However, substantial heterogeneity and low GRADE levels may affect the strength of the evidence, highlighting the need for methodological standardization and multicenter prospective validation to ensure clinical applicability.
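
Fixed-effect inverse-variance pooling of log diagnostic odds ratios is a much simpler stand-in for the bivariate random-effects model used in the review; the per-study counts below are invented for illustration:

```python
import math

def pooled_dor(studies):
    """Fixed-effect inverse-variance pooling of log diagnostic odds ratios.

    studies : list of (tp, fp, fn, tn) counts; a 0.5 continuity
    correction is applied to every cell before taking logs.
    """
    num = den = 0.0
    for tp, fp, fn, tn in studies:
        tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
        log_dor = math.log((tp * tn) / (fp * fn))
        var = 1 / tp + 1 / fp + 1 / fn + 1 / tn   # variance of log DOR
        num += log_dor / var
        den += 1 / var
    return math.exp(num / den)

dor = pooled_dor([(80, 20, 15, 85), (60, 25, 10, 70)])
print(f"pooled DOR = {dor:.1f}")
```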

Identification of high-risk hepatoblastoma in the CHIC risk stratification system based on enhanced CT radiomics features.

Yang Y, Si J, Zhang K, Li J, Deng Y, Wang F, Liu H, He L, Chen X

PubMed · Jul 15 2025
Survival of patients with high-risk hepatoblastoma remains low, and early identification of high-risk hepatoblastoma is critical. To investigate the clinical value of contrast-enhanced computed tomography (CECT) radiomics in predicting high-risk hepatoblastoma, clinical and CECT imaging data were retrospectively collected from 162 children who were treated at our hospital and pathologically diagnosed with hepatoblastoma. Patients were categorized into high-risk and non-high-risk groups according to the Children's Hepatic Tumors International Collaboration - Hepatoblastoma Study (CHIC-HS). Subsequently, these cases were randomly split into training and test groups in a 7:3 ratio. The region of interest (ROI) was first outlined in the pre-treatment venous images; the best features were then extracted and filtered, and radiomics models were built by three machine learning methods: Bagging Decision Tree (BDT), Logistic Regression (LR), and Stochastic Gradient Descent (SGD). The AUC, 95% CI, and accuracy of each model were calculated, and model performance was evaluated by the DeLong test. The Bagging decision tree model achieved AUCs of 0.966 (95% CI: 0.938-0.994) and 0.875 (95% CI: 0.77-0.98) for the training and test sets, respectively, with accuracies of 0.841 and 0.816, respectively. The logistic regression model had AUCs of 0.901 (95% CI: 0.839-0.963) and 0.845 (95% CI: 0.721-0.968) for the training and test sets, with accuracies of 0.788 and 0.735, respectively. The stochastic gradient descent model had AUCs of 0.788 (95% CI: 0.712-0.863) and 0.742 (95% CI: 0.627-0.857), with accuracies of 0.735 and 0.653, respectively. CECT-based radiomics identifies high-risk hepatoblastomas and may provide additional imaging biomarkers for identifying high-risk hepatoblastomas.
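
The 7:3 training/test division can be made stratified so each set keeps the class ratio; the sketch below assumes stratification, whereas the abstract only states that cases were randomly split 7:3:

```python
import numpy as np

def stratified_split(labels, train_frac=0.7, seed=0):
    """Random train/test split preserving the class ratio in each set."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    train_idx = []
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)                                  # shuffle within class
        train_idx.extend(idx[: int(round(train_frac * len(idx)))])
    train = np.sort(np.array(train_idx))
    test = np.setdiff1d(np.arange(len(labels)), train)
    return train, test

# e.g. 162 patients split into two risk groups (toy class sizes).
y = np.array([0] * 100 + [1] * 62)
tr, te = stratified_split(y)
print(len(tr), len(te))  # 113 49
```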
