
An integrated strategy based on radiomics and quantum machine learning: diagnosis and clinical interpretation of pulmonary ground-glass nodules.

Huang X, Xu F, Zhu W, Yao L, He J, Su J, Zhao W, Hu H

PubMed | Jul 11, 2025
Accurate classification of pulmonary pure ground-glass nodules (pGGNs) is essential for distinguishing invasive adenocarcinoma (IVA) from adenocarcinoma in situ (AIS) and minimally invasive adenocarcinoma (MIA), which significantly influences treatment decisions. This study aims to develop a high-precision integrated strategy by combining radiomics-based feature extraction, Quantum Machine Learning (QML) models, and SHapley Additive exPlanations (SHAP) analysis to improve diagnostic accuracy and interpretability in pGGN classification. A total of 322 pGGNs from 275 patients were retrospectively analyzed. The CT images were randomly divided into training and testing cohorts (80:20), with radiomic features extracted from the training cohort. Three QML models, the Quantum Support Vector Classifier (QSVC), Pegasos QSVC, and the Quantum Neural Network (QNN), were developed and compared with a classical Support Vector Machine (SVM). SHAP analysis was applied to interpret the contribution of radiomic features to the models' predictions. All three QML models outperformed the classical SVM, with the QNN model achieving the highest improvements ([Formula: see text]) in classification metrics, including accuracy (89.23%, 95% CI: 81.54%-95.38%), sensitivity (96.55%, 95% CI: 89.66%-100.00%), specificity (83.33%, 95% CI: 69.44%-94.44%), and area under the curve (AUC) (0.937, 95% CI: 0.871-0.983). SHAP analysis identified Low Gray Level Run Emphasis (LGLRE), Gray Level Non-uniformity (GLN), and Size Zone Non-uniformity (SZN) as the most critical features influencing classification. This study demonstrates that the proposed integrated strategy, combining radiomics, QML models, and SHAP analysis, significantly enhances the accuracy and interpretability of pGGN classification, particularly in small-sample datasets. It offers a promising tool for early, non-invasive lung cancer diagnosis and helps clinicians make more informed treatment decisions.
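
A minimal sketch of the classify-then-explain pattern this abstract describes, with a classical RBF-kernel SVM standing in for the quantum models (QSVC, Pegasos QSVC, QNN) and a hypothetical radiomics table; it illustrates the structure only, not the authors' implementation.

    # Classical stand-in for the quantum-kernel classifiers; file name,
    # column names, and sample counts are hypothetical.
    import pandas as pd
    import shap
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    df = pd.read_csv("pggn_radiomics.csv")                 # hypothetical feature table
    X = df.drop(columns=["label"]).values                  # e.g. LGLRE, GLN, SZN, ...
    y = df["label"].values                                 # 0 = AIS/MIA, 1 = IVA

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)
    scaler = StandardScaler().fit(X_tr)
    clf = SVC(kernel="rbf", probability=True).fit(scaler.transform(X_tr), y_tr)

    # SHAP values quantify how each radiomic feature pushes a case toward IVA.
    explainer = shap.KernelExplainer(lambda a: clf.predict_proba(a)[:, 1],
                                     shap.sample(scaler.transform(X_tr), 50))
    shap_values = explainer.shap_values(scaler.transform(X_te))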

[MP-MRI in the evaluation of non-operative treatment response, for residual and recurrent tumor detection in head and neck cancer].

Gődény M

PubMed | Jul 11, 2025
As non-surgical therapies gain acceptance in head and neck tumors, the importance of imaging has increased. New therapeutic methods (radiation therapy, targeted biological therapy, immunotherapy) require better tumor characterization and prognostic information alongside accurate anatomy. Magnetic resonance imaging (MRI) has become the gold standard in head and neck cancer evaluation, not only for staging but also for assessing tumor response, post-treatment status, and complications, as well as for detecting residual or recurrent tumor. Multiparametric anatomical and functional MRI (MP-MRI) is a true cancer imaging biomarker: in addition to high-resolution tumor anatomy, it provides molecular and functional, qualitative and quantitative data through diffusion-weighted MRI (DW-MRI) and perfusion dynamic contrast-enhanced MRI (P-DCE-MRI), which can improve the assessment of the biological target volume and the determination of treatment response. DW-MRI provides information at the cellular level about cell density and the integrity of the plasma membrane, based on water movement. P-DCE-MRI provides useful hemodynamic information about tissue vascularity and vascular permeability. Recent studies using radiomics features have shown promising results, and MP-MRI, aided by artificial intelligence, has opened new perspectives in oncologic imaging by making better use of the latest technological advances.

A novel artificial Intelligence-Based model for automated Lenke classification in adolescent idiopathic scoliosis.

Xie K, Zhu S, Lin J, Li Y, Huang J, Lei W, Yan Y

PubMed | Jul 11, 2025
To develop an artificial intelligence (AI)-driven model for automatic Lenke classification of adolescent idiopathic scoliosis (AIS) and assess its performance. This retrospective study utilized 860 spinal radiographs from 215 AIS patients with four views each, with 161 patients in the training set and 54 in the testing set. Additionally, 1220 spinal radiographs from 610 patients with only anterior-posterior (AP) and lateral (LAT) views were collected for training. The model was designed to perform keypoint detection, pedicle segmentation, and AIS classification based on a custom classification strategy. Its performance was evaluated against the gold standard using metrics such as mean absolute difference (MAD), intraclass correlation coefficient (ICC), Bland-Altman plots, Cohen's kappa, and the confusion matrix. In comparison with the gold standard, the MAD for all predicted angles was 2.29°, with an excellent ICC. Bland-Altman analysis revealed minimal differences between the methods. For Lenke classification, the model exhibited exceptional consistency in curve type, lumbar modifier, and thoracic sagittal profile, with average kappa values of 0.866, 0.845, and 0.827 and corresponding accuracy rates of 87.07%, 92.59%, and 92.59%, respectively. Subgroup analysis further confirmed the model's high consistency, with kappa values ranging from 0.635 to 0.930, 0.672 to 0.926, and 0.815 to 0.847, and accuracy rates between 90.7% and 98.1%, 92.6% and 98.3%, and 92.6% and 98.1%, respectively. This novel AI system enables rapid and accurate automatic Lenke classification, offering potential assistance to spinal surgeons.
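
For the agreement metrics reported above (MAD for angle measurements, Cohen's kappa and a confusion matrix for the classification components), a small hedged sketch with toy placeholder arrays rather than the study's data:

    import numpy as np
    from sklearn.metrics import cohen_kappa_score, confusion_matrix

    pred_angles = np.array([23.1, 41.7, 35.0])   # toy model-predicted Cobb angles (deg)
    gold_angles = np.array([21.5, 44.0, 33.2])   # toy gold-standard angles (deg)
    mad = np.mean(np.abs(pred_angles - gold_angles))

    pred_type = [1, 2, 1]                        # toy predicted Lenke curve types
    gold_type = [1, 2, 2]                        # toy reference curve types
    kappa = cohen_kappa_score(gold_type, pred_type)
    cm = confusion_matrix(gold_type, pred_type)
    print(f"MAD = {mad:.2f} deg, kappa = {kappa:.3f}")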

Interpretable Artificial Intelligence for Detecting Acute Heart Failure on Acute Chest CT Scans

Silas Nyboe Ørting, Kristina Miger, Anne Sophie Overgaard Olesen, Mikael Ploug Boesen, Michael Brun Andersen, Jens Petersen, Olav W. Nielsen, Marleen de Bruijne

arXiv preprint | Jul 11, 2025
Introduction: Chest CT scans are increasingly used in dyspneic patients where acute heart failure (AHF) is a key differential diagnosis. Interpretation remains challenging and radiology reports are frequently delayed due to a radiologist shortage, although flagging such information for emergency physicians would have therapeutic implications. Artificial intelligence (AI) can be a complementary tool to enhance diagnostic precision. We aim to develop an explainable AI model to detect radiological signs of AHF in chest CT with accuracy comparable to that of thoracic radiologists. Methods: This was a single-center, retrospective study conducted during 2016-2021 at Copenhagen University Hospital - Bispebjerg and Frederiksberg, Denmark. A Boosted Trees model was trained to predict AHF based on measurements of segmented cardiac and pulmonary structures from acute thoracic CT scans. Diagnostic labels for training and testing were extracted from radiology reports. Structures were segmented with TotalSegmentator. SHapley Additive exPlanations (SHAP) values were used to explain the impact of each measurement on the final prediction. Results: Of the 4,672 subjects, 49% were female. The final model incorporated twelve key features of AHF and achieved an area under the ROC curve of 0.87 on the independent test set. Expert radiologist review of model misclassifications found that 24 out of 64 (38%) false positives and 24 out of 61 (39%) false negatives were actually correct model predictions, with the errors originating from inaccuracies in the initial radiology reports. Conclusion: We developed an explainable AI model with strong discriminatory performance, comparable to thoracic radiologists. The AI model's stepwise, transparent predictions may support decision-making.
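
A hedged sketch of the tabular pipeline the abstract outlines (boosted trees on measurements of segmented structures, explained with SHAP); the file name and column names are hypothetical, and scikit-learn's GradientBoostingClassifier stands in for the authors' Boosted Trees model.

    import pandas as pd
    import shap
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    df = pd.read_csv("ct_structure_measurements.csv")       # hypothetical per-scan measurements
    X, y = df.drop(columns=["ahf_label"]), df["ahf_label"]   # labels derived from radiology reports
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

    model = GradientBoostingClassifier().fit(X_tr, y_tr)
    print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))

    explainer = shap.TreeExplainer(model)        # per-feature contribution to each prediction
    shap_values = explainer.shap_values(X_te)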

Intratumoral and peritumoral radiomics based on 2D ultrasound imaging in breast cancer was used to determine the optimal peritumoral range for predicting KI-67 expression.

Huang W, Zheng S, Zhang X, Qi L, Li M, Zhang Q, Zhen Z, Yang X, Kong C, Li D, Hua G

PubMed | Jul 10, 2025
Currently, radiomics focuses on intratumoral regions and fixed peritumoral regions, and an optimal peritumoral margin for predicting Ki-67 expression has not been established. The aim of this study was to develop a machine learning model analyzing ultrasound radiomics features extracted from different peritumoral margins to determine the optimal peritumoral region for predicting Ki-67 expression. A total of 453 breast cancer patients were included and randomly assigned to training and validation sets in a 7:3 ratio. In the training cohort, machine learning models were constructed for the intratumoral region and for different peritumoral margins (2 mm, 4 mm, 6 mm, 8 mm, 10 mm), identifying the Ki-67-relevant features for each ROI and comparing the models to determine the best one. These models were validated in a test cohort to identify the peritumoral margin most accurate for Ki-67 prediction. The area under the receiver operating characteristic curve (AUC) was used to evaluate performance in predicting Ki-67 expression, and the DeLong test was used to assess the differences between AUCs. SHAP (SHapley Additive exPlanations) analysis was performed on the optimal prediction model to quantify the contribution of the major radiomics features. In the validation cohort, the SVM model combining the intratumoral and 6 mm peritumoral regions showed the highest predictive performance, with an AUC of 0.9342, and differed significantly (P < 0.05) from the other models. SHAP analysis showed that 6 mm peritumoral features were more important than intratumoral features. The SVM model using the intratumoral and 6 mm peritumoral regions performed best in predicting Ki-67 expression.
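
As a hedged illustration of how peritumoral regions at different margins can be generated from a tumor mask before feature extraction (the pixel spacing, margins, and toy mask are assumptions, not the authors' code):

    import numpy as np
    from scipy.ndimage import binary_dilation

    def peritumoral_ring(tumor_mask, margin_mm, pixel_mm):
        # Return the ring of pixels within margin_mm outside the tumor boundary.
        iterations = max(1, int(round(margin_mm / pixel_mm)))
        dilated = binary_dilation(tumor_mask, iterations=iterations)
        return dilated & ~tumor_mask.astype(bool)

    mask = np.zeros((128, 128), dtype=bool)
    mask[50:80, 50:80] = True                    # toy tumor ROI
    rings = {mm: peritumoral_ring(mask, mm, pixel_mm=0.2) for mm in (2, 4, 6, 8, 10)}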

MeD-3D: A Multimodal Deep Learning Framework for Precise Recurrence Prediction in Clear Cell Renal Cell Carcinoma (ccRCC)

Hasaan Maqsood, Saif Ur Rehman Khan

arXiv preprint | Jul 10, 2025
Accurate prediction of recurrence in clear cell renal cell carcinoma (ccRCC) remains a major clinical challenge due to the disease's complex molecular, pathological, and clinical heterogeneity. Traditional prognostic models, which rely on single data modalities such as radiology, histopathology, or genomics, often fail to capture the full spectrum of disease complexity, resulting in suboptimal predictive accuracy. This study aims to overcome these limitations by proposing a deep learning (DL) framework that integrates multimodal data, including CT, MRI, histopathology whole-slide images (WSI), clinical data, and genomic profiles, to improve the prediction of ccRCC recurrence and enhance clinical decision-making. The proposed framework uses a comprehensive dataset curated from multiple publicly available sources, including TCGA, TCIA, and CPTAC. To process the diverse modalities, domain-specific models are employed: CLAM, a ResNet50-based model, is used for histopathology WSIs, while MeD-3D, a pre-trained 3D-ResNet18 model, processes CT and MRI images. For structured clinical and genomic data, a multi-layer perceptron (MLP) is used. These models extract deep feature embeddings from each modality, which are then fused through an early and late integration architecture. This fusion strategy enables the model to combine complementary information from multiple sources. Additionally, the framework is designed to handle incomplete data, a common challenge in clinical settings, by enabling inference even when certain modalities are missing.
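
A minimal PyTorch sketch of the late-fusion idea with missing-modality handling described above: per-modality embeddings are concatenated and absent modalities are zero-filled so inference still runs. Embedding sizes, modality names, and the head architecture are hypothetical.

    import torch
    import torch.nn as nn

    class LateFusionHead(nn.Module):
        """Concatenate per-modality embeddings; zero-fill any missing modality."""
        def __init__(self, dims):
            super().__init__()
            self.dims = dims          # e.g. {"ct": 512, "mri": 512, "wsi": 512, "clinical": 64}
            self.classifier = nn.Sequential(
                nn.Linear(sum(dims.values()), 256), nn.ReLU(), nn.Linear(256, 2))

        def forward(self, feats):
            batch = next(v.shape[0] for v in feats.values() if v is not None)
            parts = [feats[k] if feats.get(k) is not None else torch.zeros(batch, d)
                     for k, d in self.dims.items()]
            return self.classifier(torch.cat(parts, dim=1))

    head = LateFusionHead({"ct": 512, "mri": 512, "wsi": 512, "clinical": 64})
    logits = head({"ct": torch.randn(2, 512), "clinical": torch.randn(2, 64)})   # MRI and WSI missing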

An Enhanced Privacy-preserving Federated Few-shot Learning Framework for Respiratory Disease Diagnosis

Ming Wang, Zhaoyang Duan, Dong Xue, Fangzhou Liu, Zhongheng Zhang

arXiv preprint | Jul 10, 2025
The labor-intensive nature of medical data annotation presents a significant challenge for respiratory disease diagnosis, resulting in a scarcity of high-quality labeled datasets in resource-constrained settings. Moreover, patient privacy concerns complicate the direct sharing of local medical data across institutions, and existing centralized data-driven approaches, which rely on large amounts of available data, often compromise data privacy. This study proposes a federated few-shot learning framework with privacy-preserving mechanisms to address the issues of limited labeled data and privacy protection in diagnosing respiratory diseases. In particular, a meta-stochastic gradient descent algorithm is proposed to mitigate the overfitting that arises from insufficient data when traditional gradient descent methods are used for neural network training. Furthermore, to protect data privacy against gradient leakage, differential-privacy noise drawn from a standard Gaussian distribution is added to the gradients during the training of private models on local data, preventing the reconstruction of medical images. Given the impracticality of centralizing respiratory disease data dispersed across various medical institutions, a weighted average algorithm is employed to aggregate local diagnostic models from different clients, enhancing the adaptability of the model across diverse scenarios. Experimental results show that the proposed method yields compelling results under differential privacy while effectively diagnosing respiratory diseases using data with different structures, categories, and distributions.
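
Two of the components described above, sketched under stated assumptions (the clip norm and noise scale are illustrative, not the paper's settings): Gaussian differential-privacy noise added to clipped local gradients, and weighted averaging of client model parameters.

    import torch

    def privatize_gradient(grad, clip_norm=1.0, noise_std=0.1):
        # Clip the gradient's norm, then add Gaussian noise before it leaves the client.
        scale = min(1.0, clip_norm / (grad.norm().item() + 1e-12))
        return grad * scale + noise_std * torch.randn_like(grad)

    def weighted_average(client_states, client_weights):
        # Aggregate client state_dicts with per-client weights (e.g. local data size).
        total = float(sum(client_weights))
        return {k: sum(w * s[k] for s, w in zip(client_states, client_weights)) / total
                for k in client_states[0]}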

Understanding Dataset Bias in Medical Imaging: A Case Study on Chest X-rays

Ethan Dack, Chengliang Dai

arXiv preprint | Jul 10, 2025
Recent work has revisited the infamous "Name that dataset" task and established that an underlying bias exists in non-medical datasets, achieving high accuracy on the dataset-origin task. In this work, we revisit the same task applied to popular open-source chest X-ray datasets. Medical images are naturally more difficult to release as open source due to their sensitive nature, which has led to certain open-source datasets becoming extremely popular for research purposes. By performing the same task, we wish to explore whether dataset bias also exists in these datasets. We apply simple transformations to the datasets to try to identify bias. Given the importance of AI applications in medical imaging, it is vital to establish whether modern methods are taking shortcuts or are focused on the relevant pathology. We implement a range of different network architectures on the datasets NIH, CheXpert, MIMIC-CXR, and PadChest. We hope this work will encourage more explainable research in medical imaging and the creation of more open-source datasets in the medical domain. The corresponding code will be released upon acceptance.
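
A hedged sketch of the dataset-origin probe: images from the four collections, arranged one folder per source (a hypothetical layout), are labeled by origin and a small CNN is trained to predict that origin; the architecture and transforms are illustrative, not the paper's exact setup.

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    tfm = transforms.Compose([transforms.Grayscale(3), transforms.Resize((224, 224)),
                              transforms.ToTensor()])
    # Expected layout: one subfolder per source, e.g. data/NIH, data/CheXpert, ...
    ds = datasets.ImageFolder("data", transform=tfm)
    loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

    net = models.resnet18(weights=None)
    net.fc = nn.Linear(net.fc.in_features, len(ds.classes))   # dataset-origin classes
    opt = torch.optim.Adam(net.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:                                        # one pass shown for brevity
        opt.zero_grad(); loss_fn(net(x), y).backward(); opt.step()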

Recurrence prediction of invasive ductal carcinoma from preoperative contrast-enhanced computed tomography using deep convolutional neural network.

Umezu M, Kondo Y, Ichikawa S, Sasaki Y, Kaneko K, Ozaki T, Koizumi N, Seki H

PubMed | Jul 10, 2025
Predicting the risk of breast cancer recurrence is crucial for guiding therapeutic strategies, including enhanced surveillance and the consideration of additional treatment after surgery. In this study, we developed a deep convolutional neural network (DCNN) model to predict recurrence within six years after surgery using preoperative contrast-enhanced computed tomography (CECT) images, which are widely available and effective for detecting distant metastases. This retrospective study included preoperative CECT images from 133 patients with invasive ductal carcinoma. The images were classified into recurrence and no-recurrence groups using ResNet-101 and DenseNet-201. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC) with leave-one-patient-out cross-validation. At the optimal threshold, the classification accuracies of ResNet-101 and DenseNet-201 were 0.73 and 0.72, respectively. The median (interquartile range) AUC of DenseNet-201 (0.70 [0.69-0.72]) was significantly higher than that of ResNet-101 (0.68 [0.66-0.68]) (p < 0.05). These results suggest the potential of preoperative CECT-based DCNN models to predict breast cancer recurrence without the need for additional invasive procedures.
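
A hedged sketch of the leave-one-patient-out evaluation, using scikit-learn's LeaveOneGroupOut so that every image of a patient stays in the same fold; the toy features and a logistic regression stand in for the DCNN feature extraction and classification.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    X = rng.random((400, 128))                   # toy per-image features
    y = rng.integers(0, 2, 400)                  # toy recurrence labels
    patient_id = rng.integers(0, 133, 400)       # each patient owns several images

    scores = np.zeros(len(y))
    for tr, te in LeaveOneGroupOut().split(X, y, groups=patient_id):
        clf = LogisticRegression(max_iter=1000).fit(X[tr], y[tr])
        scores[te] = clf.predict_proba(X[te])[:, 1]
    print("Pooled AUC:", roc_auc_score(y, scores))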

A deep learning-based clinical decision support system for glioma grading using ensemble learning and knowledge distillation.

Liu Y, Shi Z, Xiao C, Wang B

PubMed | Jul 10, 2025
Gliomas are the most common malignant primary brain tumors, and grading their severity, particularly the diagnosis of low-grade gliomas, remains a challenging task for clinicians and radiologists. With advances in deep learning and medical image processing, the development of clinical decision support systems (CDSS) for glioma grading offers significant benefits for clinical treatment. This study proposes a CDSS for glioma grading that integrates a novel feature extraction framework. The method combines ensemble learning and knowledge distillation: teacher models are constructed through ensemble learning, while uncertainty-weighted ensemble averaging is applied during student model training to refine knowledge transfer. This approach narrows the teacher-student performance gap, enhancing grading accuracy, reliability, and clinical applicability while allowing lightweight deployment. Experimental results show 85.96% accuracy (a 5.2% improvement over the baseline), with precision (83.90%), recall (87.40%), and F1-score (83.90%) increasing by 7.5%, 5.1%, and 5.1%, respectively. The teacher-student performance gap is reduced to 3.2%, confirming the method's effectiveness. Furthermore, the developed CDSS not only provides rapid and accurate glioma grading but also reports the critical features influencing the grading results and integrates a methodology for generating comprehensive diagnostic reports. Consequently, the glioma grading CDSS represents a practical clinical decision support tool capable of delivering accurate and efficient auxiliary diagnostic decisions for physicians and patients.
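
A hedged PyTorch sketch of the uncertainty-weighted ensemble distillation idea: teacher predictions are combined with weights that decrease with each teacher's predictive entropy, and the student is trained to match the softened ensemble alongside the ground-truth labels. The temperature, entropy-based weighting, and loss mix are illustrative assumptions, not the authors' exact formulation.

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits_list, labels, T=4.0, alpha=0.5):
        # Weight each teacher by softmax(-entropy) of its softened prediction.
        probs = [F.softmax(t / T, dim=1) for t in teacher_logits_list]
        entropies = torch.stack([-(p * p.clamp_min(1e-12).log()).sum(dim=1) for p in probs])
        weights = F.softmax(-entropies, dim=0)                      # (n_teachers, batch)
        ensemble = sum(w.unsqueeze(1) * p for w, p in zip(weights, probs))

        kd = F.kl_div(F.log_softmax(student_logits / T, dim=1), ensemble,
                      reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student_logits, labels)
        return alpha * kd + (1 - alpha) * ce

    # Toy usage: batch of 8, 4 grade classes, 3 teachers.
    loss = distillation_loss(torch.randn(8, 4),
                             [torch.randn(8, 4) for _ in range(3)],
                             torch.randint(0, 4, (8,)))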