
An integrated strategy based on radiomics and quantum machine learning: diagnosis and clinical interpretation of pulmonary ground-glass nodules.

Huang X, Xu F, Zhu W, Yao L, He J, Su J, Zhao W, Hu H

PubMed · Jul 11 2025
Accurate classification of pulmonary pure ground-glass nodules (pGGNs) is essential for distinguishing invasive adenocarcinoma (IVA) from adenocarcinoma in situ (AIS) and minimally invasive adenocarcinoma (MIA), which significantly influences treatment decisions. This study aims to develop a high-precision integrated strategy that combines radiomics-based feature extraction, quantum machine learning (QML) models, and SHapley Additive exPlanations (SHAP) analysis to improve diagnostic accuracy and interpretability in pGGN classification. A total of 322 pGGNs from 275 patients were retrospectively analyzed. The CT images were randomly divided into training and testing cohorts (80:20), with radiomic features extracted from the training cohort. Three QML models were developed: a Quantum Support Vector Classifier (QSVC), Pegasos QSVC, and a Quantum Neural Network (QNN); these were compared with a classical Support Vector Machine (SVM). SHAP analysis was applied to interpret the contribution of radiomic features to the models' predictions. All three QML models outperformed the classical SVM, with the QNN model achieving the largest improvements ([Formula: see text]) across classification metrics: accuracy 89.23% (95% CI: 81.54%-95.38%), sensitivity 96.55% (95% CI: 89.66%-100.00%), specificity 83.33% (95% CI: 69.44%-94.44%), and area under the curve (AUC) 0.937 (95% CI: 0.871-0.983). SHAP analysis identified Low Gray Level Run Emphasis (LGLRE), Gray Level Non-uniformity (GLN), and Size Zone Non-uniformity (SZN) as the features most critical to classification. This study demonstrates that the proposed integrated strategy, combining radiomics, QML models, and SHAP analysis, significantly enhances the accuracy and interpretability of pGGN classification, particularly on small-sample datasets. It offers a promising tool for early, non-invasive lung cancer diagnosis and helps clinicians make more informed treatment decisions. Trial registration: not applicable.
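The accuracy, sensitivity, and specificity reported above are standard confusion-matrix quantities. A minimal sketch; the counts below are hypothetical, chosen only to be consistent with the reported percentages on a 65-case (20% of 322) test split, since the paper's actual confusion matrix is not given:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Hypothetical counts consistent with the abstract's 89.23% / 96.55% / 83.33%.
m = classification_metrics(tp=28, fp=6, tn=30, fn=1)
print(round(m["accuracy"] * 100, 2))     # 89.23
print(round(m["sensitivity"] * 100, 2))  # 96.55
print(round(m["specificity"] * 100, 2))  # 83.33
```

The reported confidence intervals would then come from resampling or a binomial interval on these proportions.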

Interpretable MRI Subregional Radiomics-Deep Learning Model for Preoperative Lymphovascular Invasion Prediction in Rectal Cancer: A Dual-Center Study.

Huang T, Zeng Y, Jiang R, Zhou Q, Wu G, Zhong J

PubMed · Jul 11 2025
To develop a fusion model based on explainable machine learning, combining multiparametric MRI subregional radiomics and deep learning, to preoperatively predict lymphovascular invasion (LVI) status in rectal cancer. We collected data from rectal cancer (RC) patients with histopathological confirmation from two medical centers: 301 patients served as the training set and 75 patients as an external validation set. Using K-means clustering, we divided the tumor areas into multiple subregions and extracted key radiomic features from them. We additionally employed a Vision Transformer (ViT) deep learning model to extract features. These features were integrated to construct the SubViT model. To better understand the model's decision-making process, we used the SHapley Additive exPlanations (SHAP) tool to evaluate its interpretability. Finally, we comprehensively assessed the performance of the SubViT model through receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and the DeLong test, comparing it with other models. The SubViT model demonstrated outstanding predictive performance in the training set, achieving an area under the curve (AUC) of 0.934 (95% confidence interval: 0.9074-0.9603). It also performed well in the external validation set, with an AUC of 0.884 (95% confidence interval: 0.8055-0.9616), outperforming both the subregion radiomics and imaging-based models. Furthermore, DCA indicated that the SubViT model provides higher clinical utility than the other models. As an advanced composite model, the SubViT model proved effective for the non-invasive assessment of LVI in rectal cancer.
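The subregion step above, partitioning tumor voxels into intensity clusters with K-means, can be sketched in a few lines. This is a generic 1-D K-means with a deterministic quantile initialization, not the authors' implementation:

```python
def kmeans_1d(values, k=3, iters=20):
    """Naive 1-D k-means: cluster voxel intensities into k tumor subregions."""
    vals = sorted(values)
    # deterministic init: spread initial centers across the intensity range
    centers = [vals[(len(vals) - 1) * i // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return sorted(centers)

# toy "tumor" with three intensity populations -> three subregion centers
print(kmeans_1d([10, 11, 12, 50, 52, 90, 91, 95]))  # [11.0, 51.0, 92.0]
```

Each voxel is then labeled by its nearest center, and radiomic features are computed per subregion rather than over the whole tumor mask.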

Breast lesion classification via colorized mammograms and transfer learning in a novel CAD framework.

Hussein AA, Valizadeh M, Amirani MC, Mirbolouk S

PubMed · Jul 11 2025
Medical imaging sciences and diagnostic techniques for breast cancer (BC) imaging have advanced tremendously, particularly with the use of mammography; however, radiologists may still misinterpret breast images, leaving limitations and flaws in the screening process. As a result, Computer-Aided Diagnosis (CAD) systems have become increasingly popular due to their ability to operate independently of human analysis. Current CAD systems use grayscale analysis, which lacks the contrast needed to differentiate benign from malignant lesions. This study presents an innovative CAD system that transforms standard grayscale mammography images into RGB color images through a three-path preprocessing framework developed for noise reduction, lesion highlighting, and tumor-centric intensity adjustment using a data-driven transfer function. In contrast to a generic approach, this method statistically tailors the colorization to emphasize malignant regions, enhancing the ability of both machines and humans to recognize cancerous areas. As a consequence of this conversion, breast tumors and anomalies become more visible, allowing more accurate features to be extracted from them. In a subsequent step, machine learning (ML) algorithms classify these tumors as malignant or benign. A pre-trained model is used to extract comprehensive features from the colored mammography images. A variety of techniques are implemented in the pre-processing stage to minimize noise and improve image perception; the most challenging is adjusting pixel intensity values in the mammography images using a data-driven transfer function derived from tumor intensity histograms. This adjustment draws attention to tumors while reducing the brightness of other areas in the breast image.
Accuracy, sensitivity, specificity, precision, F1-score, and area under the curve (AUC) were used to evaluate the efficacy of the employed methodologies. A variety of pre-trained models and ML techniques were tested; the combination of EfficientNetB0 feature extraction with a Support Vector Machine (SVM) classifier produced the best results, with accuracy, sensitivity, specificity, precision, F1-score, and AUC of 99.4%, 98.7%, 99.1%, 99%, 98.8%, and 100%, respectively. These results make clear that the developed method not only advances the state of the art in technical terms but also provides radiologists with a practical tool to help reduce diagnostic errors and improve early breast cancer detection.
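The tumor-centric intensity adjustment described above can be illustrated with a lookup table that brightens a tumor intensity band and suppresses the rest. This is a toy stand-in for the paper's histogram-derived transfer function; the band limits and gamma below are hypothetical:

```python
def tumor_emphasis_lut(tumor_lo, tumor_hi, gamma=0.5):
    """256-entry lookup table: boost contrast inside the tumor intensity band
    (derived, in the paper, from tumor intensity histograms) and dim the rest."""
    lut = []
    for g in range(256):
        if tumor_lo <= g <= tumor_hi:
            t = (g - tumor_lo) / (tumor_hi - tumor_lo)
            lut.append(int(255 * t ** gamma))  # gamma < 1 boosts in-band contrast
        else:
            lut.append(g // 2)                 # suppress background brightness
    return lut

# hypothetical tumor band 100-200: in-band pixels map to the full output range
lut = tumor_emphasis_lut(100, 200)
print(lut[50], lut[100], lut[150], lut[200])  # 25 0 180 255
```

In a three-path framework, differently tuned curves like this could feed the R, G, and B channels to produce the colorized image.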

Machine Learning-Assisted Multimodal Early Screening of Lung Cancer Based on a Multiplexed Laser-Induced Graphene Immunosensor.

Cai Y, Ke L, Du A, Dong J, Gai Z, Gao L, Yang X, Han H, Du M, Qiang G, Wang L, Wei B, Fan Y, Wang Y

PubMed · Jul 11 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, largely due to late-stage diagnosis. Early detection is critical for improving patient outcomes, yet current screening methods, such as low-dose computed tomography (CT), often lack the sensitivity and specificity required for early-stage detection. Here, we present a multimodal early screening platform that integrates a multiplexed laser-induced graphene (LIG) immunosensor with machine learning to enhance the accuracy of lung cancer diagnosis. Our platform enables the rapid, cost-effective, and simultaneous detection of four tumor markers (neuron-specific enolase (NSE), carcinoembryonic antigen (CEA), p53, and SOX2) with limits of detection (LOD) as low as 1.62 pg/mL. By combining proteomic data from the immunosensor with deep learning-based CT imaging features and clinical data, we developed a multimodal predictive model that achieves an area under the curve (AUC) of 0.936, significantly outperforming single-modality approaches. This platform offers a transformative solution for early lung cancer screening, particularly in resource-limited settings, and provides potential technical support for precision medicine in oncology.
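A detection limit like the 1.62 pg/mL quoted above is conventionally estimated from a sensor's calibration curve with the 3-sigma criterion. A minimal sketch; the blank noise and slope values below are made up for illustration, not taken from the paper:

```python
def limit_of_detection(blank_sd, slope):
    """Classic 3-sigma limit of detection for a calibrated sensor:
    LOD = 3 * (standard deviation of the blank signal) / (calibration slope)."""
    return 3.0 * blank_sd / slope

# hypothetical calibration: blank noise 0.54 (a.u.), slope 1.0 a.u. per pg/mL
print(limit_of_detection(0.54, 1.0))  # ~1.62 pg/mL
```

A steeper calibration slope or quieter blank signal lowers the LOD, which is the usual engineering target for immunosensor design.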

Explainable artificial intelligence for pneumonia classification: Clinical insights into deformable prototypical part network in pediatric chest x-ray images.

Yazdani E, Neizehbaz A, Karamzade-Ziarati N, Kheradpisheh SR

PubMed · Jul 11 2025
Pneumonia detection in chest X-rays (CXR) increasingly relies on AI-driven diagnostic systems. However, their "black-box" nature often lacks transparency, underscoring the need for interpretability to improve patient outcomes. This study presents the first application of the Deformable Prototypical Part Network (D-ProtoPNet), an ante-hoc interpretable deep learning (DL) model, for pneumonia classification in pediatric patients' CXR images. Clinical insights were integrated through expert radiologist evaluation of the model's learned prototypes and activated image patches, ensuring that explanations aligned with medically meaningful features. The model was developed and tested on a retrospective dataset of 5,856 CXR images of pediatric patients aged 1-5 years. The images were originally acquired at a tertiary academic medical center as part of routine clinical care and were publicly hosted on Kaggle. This dataset comprised anterior-posterior images labeled as normal, viral, or bacterial. It was divided into 80% training and 20% validation splits and used in supervised five-fold cross-validation. Performance metrics were compared with the original ProtoPNet, using ResNet50 as the base model. An experienced radiologist assessed the clinical relevance of the learned prototypes, patch activations, and model explanations. The D-ProtoPNet achieved an accuracy of 86%, precision of 86%, recall of 85%, and AUC of 93%, marking a 3% improvement over the original ProtoPNet. While further optimization is required before clinical use, the radiologist praised D-ProtoPNet's intuitive explanations, highlighting its interpretability and potential to aid clinical decision-making. Prototypical part learning offers a balance between classification performance and explanation quality, but requires improvements to match the accuracy of black-box models.
This study underscores the importance of integrating domain expertise during model evaluation to ensure the interpretability of XAI models is grounded in clinically valid insights.
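The five-fold cross-validation described above is just index bookkeeping: partition the training cases into five folds and rotate which fold is held out. A minimal, library-free sketch:

```python
def kfold_indices(n, k=5):
    """Yield (train, val) index lists for k-fold cross-validation over n samples."""
    # fold sizes differ by at most one when n is not divisible by k
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, val

# 10 samples, 5 folds: each validation fold holds 2 samples
for train, val in kfold_indices(10, k=5):
    print(len(train), val)
```

In practice the indices would be shuffled (and often stratified by class label) before folding; this sketch keeps them ordered for clarity.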

Performance of Radiomics and Deep Learning Models in Predicting Distant Metastases in Soft Tissue Sarcomas: A Systematic Review and Meta-analysis.

Mirghaderi P, Valizadeh P, Haseli S, Kim HS, Azhideh A, Nyflot MJ, Schaub SK, Chalian M

PubMed · Jul 11 2025
Predicting distant metastases in soft tissue sarcomas (STS) is vital for guiding clinical decision-making. Recent advancements in radiomics and deep learning (DL) models have shown promise, but their diagnostic accuracy remains unclear. This meta-analysis aims to assess the performance of radiomics and DL-based models in predicting metastases in STS by analyzing pooled sensitivity and specificity. Following PRISMA guidelines, a thorough search was conducted in PubMed, Web of Science, and Embase. A random-effects model was used to estimate the pooled area under the curve (AUC), sensitivity, and specificity. Subgroup analyses were performed based on imaging modality (MRI, PET, PET/CT), feature extraction method (DL radiomics [DLR] vs. handcrafted radiomics [HCR]), incorporation of clinical features, and dataset used. Model robustness and potential biases were assessed via the I² statistic for heterogeneity, leave-one-out sensitivity analyses, and Egger's test for publication bias. Nineteen studies involving 1,712 patients were included. The pooled AUC for predicting metastasis was 0.88 (95% CI: 0.80-0.92). The pooled AUC values were 0.88 (95% CI: 0.77-0.89) for MRI-based models, 0.80 (95% CI: 0.76-0.92) for PET-based models, and 0.91 (95% CI: 0.78-0.93) for PET/CT-based models, with no significant differences (p = 0.75). DL-based models showed significantly higher sensitivity than HCR models (p < 0.01). Including clinical features did not significantly improve model performance (AUC: 0.90 vs. 0.88, p = 0.99). Significant heterogeneity was noted (I² > 25%), and Egger's test suggested potential publication bias (p < 0.001). Radiomics models showed promising potential for predicting metastases in STS, with DL approaches outperforming traditional HCR.
While integrating this approach into routine clinical practice is still evolving, it can aid physicians in identifying high-risk patients and implementing targeted monitoring strategies to reduce the risk of severe complications associated with metastasis. However, challenges such as heterogeneity, limited external validation, and potential publication bias persist. Future research should concentrate on standardizing imaging protocols and conducting multi-center validation studies to improve the clinical applicability of radiomics predictive models.
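The random-effects pooling used in meta-analyses like this one is most often the DerSimonian-Laird estimator, which adds a between-study variance term to each study's weight. A compact sketch with toy effect sizes, not the study data:

```python
def dersimonian_laird(effects, variances):
    """Pool per-study effect sizes with a DerSimonian-Laird random-effects model.
    Returns (pooled effect, standard error, between-study variance tau^2)."""
    w = [1 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                        # between-study variance
    w_re = [1 / (v + tau2) for v in variances]           # random-effects weights
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = (1 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# three identical toy studies: no heterogeneity, pooled effect equals each study
print(dersimonian_laird([0.88, 0.88, 0.88], [0.01, 0.01, 0.01]))
```

When the studies disagree more than their within-study variances explain (Q > df), tau² becomes positive and the pooled confidence interval widens accordingly.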

Enhanced Detection of Prostate Cancer Lesions on Biparametric MRI Using Artificial Intelligence: A Multicenter, Fully-crossed, Multi-reader Multi-case Trial.

Xing Z, Chen J, Pan L, Huang D, Qiu Y, Sheng C, Zhang Y, Wang Q, Cheng R, Xing W, Ding J

PubMed · Jul 11 2025
To assess artificial intelligence (AI)'s added value in detecting prostate cancer lesions on MRI by comparing radiologists' performance with and without AI assistance. A fully-crossed multi-reader multi-case clinical trial was conducted across three institutions with 10 non-expert radiologists. Biparametric MRI cases comprising T2WI, diffusion-weighted images, and apparent diffusion coefficient maps were retrospectively collected. Three reading modes were evaluated: AI alone, radiologists alone (unaided), and radiologists with AI (aided). Aided and unaided readings were compared using the Dorfman-Berbaum-Metz method. Reference standards were established by senior radiologists based on pathological reports. Performance was quantified via sensitivity, specificity, and area under the alternative free-response receiver operating characteristic curve (AFROC-AUC). Among 407 eligible male patients (mean age 69.5 ± 9.3 years), aided reading significantly improved lesion-level sensitivity from 67.3% (95% confidence interval [CI]: 58.8%, 75.8%) to 85.5% (95% CI: 81.3%, 89.7%), a substantial difference of 18.2% (95% CI: 10.7%, 25.7%, p<0.001). Case-level specificity increased from 75.9% (95% CI: 68.7%, 83.1%) to 79.5% (95% CI: 74.1%, 84.8%), demonstrating non-inferiority (p<0.001). AFROC-AUC was also higher for aided than unaided reading (86.9% vs 76.1%, p<0.001). AI alone achieved robust performance (AFROC-AUC = 83.1%, 95% CI: 79.7%, 86.6%), with lesion-level sensitivity of 88.4% (95% CI: 84.0%, 92.0%) and case-level specificity of 77.8% (95% CI: 71.5%, 83.3%). Subgroup analysis revealed improved detection for lesions with smaller size and lower Prostate Imaging Reporting and Data System scores. AI-aided reading significantly enhances lesion detection compared to unaided reading, while AI alone also demonstrates high diagnostic accuracy.
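Confidence intervals on sensitivity and specificity like those above are, in the simplest case, normal-approximation (Wald) intervals on a proportion; multi-reader studies use more elaborate variance models, but the basic shape is the same. A minimal sketch:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion, clipped to [0, 1]."""
    p = successes / n
    half = z * math.sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# toy example: 50 lesions detected out of 100
lo, hi = wald_ci(50, 100)
print(round(lo, 3), round(hi, 3))  # 0.402 0.598
```

Note the interval narrows with the square root of the sample size, which is why lesion-level and case-level intervals in the abstract have different widths.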

Impact of heart rate on coronary artery stenosis grading accuracy using deep learning-based fast kV-switching CT: A phantom study.

Mikayama R, Kojima T, Shirasaka T, Yamane S, Funatsu R, Kato T, Yabuuchi H

PubMed · Jul 11 2025
Deep learning-based fast kV-switching CT (DL-FKSCT) generates complete sinograms for fast kV-switching dual-energy CT (DECT) scans by using a trained neural network to restore missing views. Such restoration significantly enhances the image quality of coronary CT angiography (CCTA), and the allowable heart rate (HR) may vary between DECT and single-energy CT (SECT). This study aimed to examine the effect of HR on CCTA using DL-FKSCT. We scanned stenotic coronary artery phantoms attached to a pulsating cardiac phantom with DECT and SECT modes on a DL-FKSCT scanner. The phantom unit was operated at simulated HRs ranging from 0 (static) to 50-70 beats per minute (bpm). The sharpness and stenosis ratio of the coronary model were quantitatively compared between DECT and SECT, stratified by simulated HR settings, using the paired t-test (significance was set at p < 0.01 with a Bonferroni adjustment for multiple comparisons). Regarding image sharpness, DECT was significantly superior to SECT. In terms of the stenosis ratio relative to a static image reference, the 70 keV virtual monochromatic image in DECT exhibited errors exceeding 10% at HRs above 65 bpm (p < 0.01), whereas 120 kVp SECT registered errors below 10% across all HR settings, with no significant differences observed. In DL-FKSCT, DECT exhibited a lower upper limit of HR than SECT. Therefore, HR control is important for DECT scans in DL-FKSCT.
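The quantity whose error is being tracked above, percent diameter stenosis, is just a ratio of lumen to reference diameter, and the grading error is its deviation from the static-phantom value. A sketch with hypothetical phantom diameters:

```python
def stenosis_ratio(lumen_diam_mm, reference_diam_mm):
    """Percent diameter stenosis: how much the lumen narrows vs. the reference vessel."""
    return (1 - lumen_diam_mm / reference_diam_mm) * 100

def stenosis_error(measured_pct, static_reference_pct):
    """Absolute grading error in percentage points vs. the static-phantom value."""
    return abs(measured_pct - static_reference_pct)

# hypothetical 4 mm vessel with a 2 mm residual lumen -> 50% stenosis
nominal = stenosis_ratio(2.0, 4.0)
print(nominal)  # 50.0
# motion blur that makes the lumen appear 2.5 mm wide under-grades the stenosis
print(stenosis_error(stenosis_ratio(2.5, 4.0), nominal))  # 12.5
```

This makes the abstract's 10% threshold concrete: at high heart rates, motion-induced diameter errors push the DECT-measured ratio more than 10 percentage points from the static reference.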

Tiny-objective segmentation for spot signs on multi-phase CT angiography via contrastive learning with dynamic-updated positive-negative memory banks.

Zhang J, Horn M, Tanaka K, Bala F, Singh N, Benali F, Ganesh A, Demchuk AM, Menon BK, Qiu W

PubMed · Jul 11 2025
Presence of a spot sign on CT angiography (CTA) is associated with hematoma growth in patients with intracerebral hemorrhage, and measuring spot-sign volume over time may help predict hematoma expansion. Because spot signs share imaging characteristics with veins and calcifications and appear as tiny objects in CTA images, our aim was to develop an automated method that detects spot signs accurately. We propose a novel collaborative network architecture based on a student-teacher model that efficiently exploits additional negative samples through contrastive learning. In particular, a set of dynamically updated memory banks is proposed to learn more distinctive features from the extremely imbalanced positive and negative samples. Alongside this, a two-stream network with an additional contextual decoder is designed to learn contextual information at different scales in a collaborative way. Furthermore, to reduce the false positive detection rate, a region-restriction loss function is designed to confine spot-sign segmentation to within the hemorrhage. Quantitative evaluations using Dice, volume correlation, sensitivity, specificity, and area under the curve show that the proposed method segments and detects spot signs accurately. Our contrastive learning framework obtained the best segmentation performance, with a mean Dice of 0.638 ± 0.211, a mean VC of 0.871, and a mean VDP of 0.348 ± 0.237, and the best detection performance, with a sensitivity of 0.956 (CI: 0.895-1.000), specificity of 0.833 (CI: 0.766-0.900), and AUC of 0.892 (CI: 0.888-0.896), outperforming nnUNet, cascade-nnUNet, nnUNet++, SegRegNet, UNETR, and SwinUNETR. This paper proposed a novel segmentation approach that leverages contrastive learning over additional negative samples for the automatic segmentation of spot signs on multi-phase CTA (mCTA) images.
The experimental results demonstrate the effectiveness of our method and highlight its potential applicability in clinical settings for measuring spot sign volumes.
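The Dice score used above is worth spelling out, because on tiny targets like spot signs it behaves very differently from voxel-wise accuracy. A minimal sketch on flattened binary masks:

```python
def dice_coefficient(pred, target):
    """Dice similarity between two binary masks (flattened 0/1 sequences):
    2 * |intersection| / (|pred| + |target|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2 * inter / total if total else 1.0  # both masks empty -> perfect

# a 2-voxel spot sign in a 12-voxel image: missing one voxel drops Dice to 2/3,
# even though 11 of 12 voxels are classified correctly (~92% accuracy)
pred   = [0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
target = [0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]
print(dice_coefficient(pred, target))
```

This sensitivity to small overlaps is exactly why tiny-object segmentation papers report Dice alongside detection-level sensitivity and specificity.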

Advancing Rare Neurological Disorder Diagnosis: Addressing Challenges with Systematic Reviews and AI-Driven MRI Meta-Trans Learning Framework for NeuroDegenerative Disorders.

Gupta A, Malhotra D

PubMed · Jul 11 2025
Neurological disorders (ND) affect a large portion of the global population, impacting the brain, spinal cord, and nerves. These disorders fall into categories such as NeuroDevelopmental (NDD), NeuroBiological (NBD), and NeuroDegenerative (NDe) disorders, which range from common to rare conditions. While artificial intelligence (AI) has advanced healthcare diagnostics, training machine learning (ML) and deep learning (DL) models for early detection of rare neurological disorders remains a challenge due to limited patient data. This data scarcity poses a significant public health issue. Meta-Trans Learning (MTAL), which integrates Meta-Learning (MtL) and Transfer Learning (TL), offers a promising solution by leveraging small datasets to extract expert patterns, generalize findings, and reduce AI bias in healthcare. This research systematically reviews studies from 2018 to 2024 to explore how ML and MTAL techniques are applied in diagnosing NDD, NBD, and NDe disorders. It also provides statistical and parametric analysis of ML and DL methods for neurological disorder diagnosis. Lastly, the study introduces an MRI-based NDe-MTAL framework to aid healthcare professionals in the early detection of rare neurological disorders, aiming to enhance diagnostic accuracy and advance healthcare practices.