
A machine learning-based decision support tool for standardizing intracavitary versus interstitial brachytherapy technique selection in high-dose-rate cervical cancer.

Kajikawa T, Masui K, Sakai K, Takenaka T, Suzuki G, Yoshino Y, Nemoto H, Yamazaki H, Yamada K

PubMed · Aug 20, 2025
To develop and evaluate a machine-learning (ML) decision-support tool that standardizes selection of intracavitary brachytherapy (ICBT) versus hybrid intracavitary/interstitial brachytherapy (IC/ISBT) in high-dose-rate (HDR) cervical cancer. We retrospectively analyzed 159 HDR brachytherapy plans from 50 consecutive patients treated between April 2022 and June 2024. Brachytherapy techniques (ICBT or IC/ISBT) were determined by an experienced radiation oncologist using CT/MRI-based 3-D image-guided brachytherapy. For each plan, 144 shape- and distance-based geometric features describing the high-risk clinical target volume (HR-CTV), bladder, rectum, and applicator were extracted. Nested five-fold cross-validation combined minimum-redundancy-maximum-relevance (mRMR) feature selection with five classifiers (k-nearest neighbors, logistic regression, naïve Bayes, random forest, support-vector classifier) and two voting ensembles (hard and soft voting). Model performance was benchmarked against single-factor rules (HR-CTV > 30 cm³; maximum lateral HR-CTV-tandem distance > 25 mm). Logistic regression achieved the highest test accuracy (0.849 ± 0.023) and a mean area under the curve (AUC) of 0.903 ± 0.033, outperforming the volume rule and matching the distance rule's AUC (0.907 ± 0.057) while exceeding its accuracy (0.805 ± 0.114). These differences were not statistically significant. Feature-importance analysis showed that the maximum HR-CTV-tandem lateral distance and the bladder's minimal short-axis length consistently dominated model decisions. CONCLUSIONS: A compact ML tool using two readily measurable geometric features can reliably assist clinicians in choosing between ICBT and IC/ISBT, thereby reducing inter-physician variability and promoting standardized HDR cervical brachytherapy technique selection.
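
As a rough illustration of the modelling setup described above, the sketch below runs a nested five-fold cross-validation in scikit-learn with a soft-voting ensemble of the five listed classifiers. Since scikit-learn has no built-in mRMR selector, mutual-information ranking stands in for it; the synthetic data, feature counts, and hyperparameter grid are assumptions rather than the authors' actual setup.

```python
# Hypothetical sketch of a nested-CV pipeline; mRMR is approximated with
# mutual-information ranking (SelectKBest), since scikit-learn has no
# built-in mRMR selector. Data shapes and grids are illustrative.
from sklearn.datasets import make_classification
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

# Stand-in for the 159 plans x 144 geometric features described in the paper.
X, y = make_classification(n_samples=159, n_features=144, n_informative=10,
                           weights=[0.6, 0.4], random_state=0)

base = [
    ("knn", KNeighborsClassifier()),
    ("lr", LogisticRegression(max_iter=5000)),
    ("nb", GaussianNB()),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("svc", SVC(probability=True, random_state=0)),
]
soft_vote = VotingClassifier(estimators=base, voting="soft")

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(mutual_info_classif)),   # mRMR stand-in
    ("clf", soft_vote),
])

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
search = GridSearchCV(pipe, {"select__k": [2, 5, 10, 20]},
                      scoring="roc_auc", cv=inner)
auc = cross_val_score(search, X, y, scoring="roc_auc", cv=outer)
print(f"nested-CV AUC: {auc.mean():.3f} +/- {auc.std():.3f}")
```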

Machine learning-assisted radiogenomic analysis for miR-15a expression prediction in renal cell carcinoma.

Mytsyk Y, Kowal P, Kobilnyk Y, Lesny M, Skrzypczyk M, Stroj D, Dosenko V, Kucheruk O

PubMed · Aug 20, 2025
Renal cell carcinoma (RCC) is a prevalent malignancy with highly variable outcomes. MicroRNA-15a (miR-15a) has emerged as a promising prognostic biomarker in RCC, linked to angiogenesis, apoptosis, and proliferation. Radiogenomics integrates radiological features with molecular data to non-invasively predict biomarkers, offering valuable insights for precision medicine. This study aimed to develop a machine learning-assisted radiogenomic model to predict miR-15a expression in RCC. A retrospective analysis was conducted on 64 RCC patients who underwent preoperative multiphase contrast-enhanced CT or MRI. Radiological features, including tumor size, necrosis, and nodular enhancement, were evaluated. MiR-15a expression was quantified using real-time qPCR from archived tissue samples. Polynomial regression and Random Forest models were employed for prediction, and hierarchical clustering with K-means analysis was used for phenotypic stratification. Statistical significance was assessed using non-parametric tests and machine learning performance metrics. Tumor size was the strongest radiological predictor of miR-15a expression (adjusted R<sup>2</sup> = 0.8281, p < 0.001). High miR-15a levels correlated with aggressive features, including necrosis and nodular enhancement (p < 0.05), while lower levels were associated with cystic components and macroscopic fat. The Random Forest regression model explained 65.8% of the variance in miR-15a expression (R<sup>2</sup> = 0.658). For classification, the Random Forest classifier demonstrated exceptional performance, achieving an AUC of 1.0, a precision of 1.0, a recall of 0.9, and an F1-score of 0.95. Hierarchical clustering effectively segregated tumors into aggressive and indolent phenotypes, consistent with clinical expectations. Radiogenomic analysis using machine learning provides a robust, non-invasive approach to predicting miR-15a expression, enabling enhanced tumor stratification and personalized RCC management. These findings underscore the clinical utility of integrating radiological and molecular data, paving the way for broader adoption of precision medicine in oncology.
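
The snippet below is a hedged sketch of the kind of Random Forest regression, classification, and unsupervised stratification workflow the abstract describes; the imaging features, expression values, and thresholds are synthetic placeholders, not the study's data.

```python
# Illustrative radiogenomic modelling sketch on synthetic data; the real
# study used 64 patients with curated CT/MRI features and qPCR expression.
import numpy as np
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, roc_auc_score, precision_score, recall_score, f1_score
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Columns: tumor size (cm), necrosis (0/1), nodular enhancement (0/1), cystic component (0/1)
X = np.column_stack([rng.uniform(1, 12, 64), rng.integers(0, 2, 64),
                     rng.integers(0, 2, 64), rng.integers(0, 2, 64)])
mir15a = 0.5 * X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 64)   # continuous expression (toy)
high_expr = (mir15a > np.median(mir15a)).astype(int)        # binarised label

X_tr, X_te, y_tr, y_te, c_tr, c_te = train_test_split(
    X, mir15a, high_expr, test_size=0.3, random_state=0, stratify=high_expr)

reg = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("regression R2:", round(r2_score(y_te, reg.predict(X_te)), 3))

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, c_tr)
prob = clf.predict_proba(X_te)[:, 1]
pred = clf.predict(X_te)
print("AUC:", round(roc_auc_score(c_te, prob), 3),
      "precision:", round(precision_score(c_te, pred), 3),
      "recall:", round(recall_score(c_te, pred), 3),
      "F1:", round(f1_score(c_te, pred), 3))

# Unsupervised stratification into two imaging phenotypes (cf. the paper's clustering step)
phenotype = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```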

An effective flowchart for multimodal brain tumor binary classification with ranked 3D texture features.

Barstuğan M

PubMed · Aug 20, 2025
Brain tumors have complex structures, and their shape, density, and size can vary widely. Consequently, their accurate classification, which involves identifying features that best describe the tumor data, is challenging. Using classical 2D texture features can yield only limited accuracy. Here, we show that this limitation can be overcome by using 3D feature extraction and ranking methods. Brain tumor images obtained through 3D magnetic resonance imaging were used to classify high-grade and low-grade glioma in the BraTS 2017 dataset. From the dataset, texture properties for each of the four sequences (i.e., FLAIR, T1, T1c, and T2) were extracted using a 3D gray level co-occurrence matrix. Various combinations of brain tumor feature sets were created, and feature ranking methods-Bhattacharyya, entropy, receiver operating characteristic, the t-test, and the Wilcoxon test-were applied to them. Features were classified using gradient boosting, support vector machines (SVMs), and random forest methods. The performance of all combinations was evaluated using the sensitivity, specificity, accuracy, precision, and F-score obtained from twofold, fivefold, and tenfold cross-validation tests. In all experiments, the most effective scheme was the quadruple combination (FLAIR + T1 + T1c + T2) with the entropy feature-ranking method and twofold cross-validation. Notably, the proposed machine-learning framework achieved 100% sensitivity, 97.29% specificity, 99.30% accuracy, 99.07% precision, and a 99.53% F-score for glioma classification with an SVM. The proposed flowchart constitutes a novel brain tumor classification system that is competitive with recently proposed methods.
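
A minimal sketch of the ranking-plus-classification stage is shown below, assuming the 3D texture features have already been extracted into a feature matrix (the GLCM extraction itself is not shown). Per-feature ROC ranking, one of the five criteria listed above, and an SVM with twofold cross-validation stand in for the full pipeline, on synthetic data.

```python
# Minimal ranking + SVM classification sketch on a placeholder feature matrix.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_validate, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for concatenated FLAIR + T1 + T1c + T2 texture features (HGG vs LGG).
X, y = make_classification(n_samples=285, n_features=88, n_informative=12, random_state=0)

# Rank each feature by how well it separates the classes on its own (per-feature ROC AUC).
auc_per_feature = np.array([max(roc_auc_score(y, X[:, j]), roc_auc_score(y, -X[:, j]))
                            for j in range(X.shape[1])])
top = np.argsort(auc_per_feature)[::-1][:20]          # keep the 20 best-ranked features

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
cv = StratifiedKFold(n_splits=2, shuffle=True, random_state=0)   # twofold CV, as in the paper
scores = cross_validate(clf, X[:, top], y, cv=cv,
                        scoring=["accuracy", "precision", "recall", "f1"])
for name, vals in scores.items():
    if name.startswith("test_"):
        print(name, round(vals.mean(), 4))
```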

Deep learning approach for screening neonatal cerebral lesions on ultrasound in China.

Lin Z, Zhang H, Duan X, Bai Y, Wang J, Liang Q, Zhou J, Xie F, Shentu Z, Huang R, Chen Y, Yu H, Weng Z, Ni D, Liu L, Zhou L

PubMed · Aug 20, 2025
Timely and accurate diagnosis of severe neonatal cerebral lesions is critical for preventing long-term neurological damage and addressing life-threatening conditions. Cranial ultrasound is the primary screening tool, but the process is time-consuming and reliant on the operator's proficiency. In this study, a deep-learning-powered neonatal cerebral lesion screening system, capable of automatically extracting standard views from cranial ultrasound videos and identifying cases with severe cerebral lesions, was developed from 8,757 neonatal cranial ultrasound images. The system demonstrates areas under the curve of 0.982 and 0.944, with sensitivities of 0.875 and 0.962, on internal and external video datasets, respectively. Furthermore, the system outperforms junior radiologists and performs on par with mid-level radiologists while completing examinations 55.11% faster. In conclusion, the developed system can automatically extract standard views from cranial ultrasound videos and reach correct diagnoses efficiently, and may be useful across multiple deployment scenarios.
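
The sketch below illustrates, in broad strokes, how such a two-stage video pipeline could be wired together: frame-level standard-view selection followed by lesion classification on the retained frames. The backbones, thresholds, and tensor shapes are assumptions and the weights are untrained, so this is an architectural illustration rather than the authors' system.

```python
# Architectural sketch (not the authors' code) of a two-stage ultrasound-video pipeline.
import torch
import torchvision.models as models

def make_head(num_classes: int) -> torch.nn.Module:
    net = models.resnet18(weights=None)          # untrained backbone, for illustration only
    net.fc = torch.nn.Linear(net.fc.in_features, num_classes)
    return net.eval()

view_selector = make_head(num_classes=2)         # standard view vs non-standard frame
lesion_classifier = make_head(num_classes=2)     # severe lesion vs normal

video = torch.rand(120, 3, 224, 224)             # 120 ultrasound frames (placeholder data)
with torch.no_grad():
    view_prob = torch.softmax(view_selector(video), dim=1)[:, 1]
    standard_frames = video[view_prob > 0.5]     # keep candidate standard views
    if len(standard_frames) > 0:
        lesion_prob = torch.softmax(lesion_classifier(standard_frames), dim=1)[:, 1]
        # Case-level decision: flag the exam if any selected frame looks abnormal.
        print("case flagged:", bool((lesion_prob > 0.5).any()))
```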

Differentiation of Suspicious Microcalcifications Using Deep Learning: DCIS or IDC.

Xu W, Deng S, Mao G, Wang N, Huang Y, Zhang C, Sa G, Wu S, An Y

PubMed · Aug 20, 2025
To explore the value of a deep learning-based model in distinguishing between ductal carcinoma in situ (DCIS) and invasive ductal carcinoma (IDC) manifesting as suspicious microcalcifications on mammography. A total of 294 breast cancer cases (106 DCIS and 188 IDC) from two centers were randomly allocated into training, internal validation, and external validation sets in this retrospective study. Clinical variables differentiating DCIS from IDC were identified through univariate and multivariate analyses and used to build a clinical model. Deep learning features were extracted using ResNet101 and selected by minimum redundancy maximum relevance (mRMR) and the least absolute shrinkage and selection operator (LASSO). A deep learning model was developed using the deep learning features, and a combined model was constructed by integrating these features with the clinical variables. The area under the receiver operating characteristic curve (AUC) was used to assess the performance of each model. Multivariate logistic regression identified lesion type and BI-RADS category as independent predictors for differentiating DCIS from IDC. The clinical model incorporating these factors achieved an AUC of 0.67, sensitivity of 0.53, specificity of 0.81, and accuracy of 0.63 in the external validation set. In comparison, the deep learning model showed an AUC of 0.97, sensitivity of 0.94, specificity of 0.92, and accuracy of 0.93. For the combined model, the AUC, sensitivity, specificity, and accuracy were 0.97, 0.96, 0.92, and 0.95, respectively. The diagnostic efficacy of the deep learning model and the combined model was comparable (p > 0.05), and both models outperformed the clinical model (p < 0.05). Deep learning provides an effective non-invasive approach to differentiating DCIS from IDC presenting as suspicious microcalcifications on mammography.
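
Below is a hedged sketch of the deep-feature workflow the abstract outlines: a frozen ResNet101 as feature extractor, a mutual-information filter standing in for mRMR, and an L1-penalized logistic regression standing in for LASSO-based selection. The images, labels, and hyperparameters are placeholders, not the study's data.

```python
# Deep-feature extraction + feature selection + sparse classifier, as a sketch.
import numpy as np
import torch
import torchvision.models as models
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

backbone = models.resnet101(weights=None)
backbone.fc = torch.nn.Identity()            # expose the 2048-dim penultimate features
backbone.eval()

images = torch.rand(32, 3, 224, 224)         # placeholder mammography patches
labels = np.random.randint(0, 2, 32)         # 0 = DCIS, 1 = IDC (placeholder labels)

with torch.no_grad():
    deep_feats = backbone(images).numpy()    # shape (32, 2048)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=20),                  # mRMR stand-in
    LogisticRegression(penalty="l1", solver="liblinear"),    # LASSO-style sparsity
)
model.fit(deep_feats, labels)
print("training accuracy:", model.score(deep_feats, labels))
```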

Applying a large language model for automated quality scoring of radiology requisitions using standardized criteria.

Büyüktoka RE, Surucu M, Erekli Derinkaya PB, Adibelli ZH, Salbas A, Koc AM, Buyuktoka AD, Isler Y, Ugur MA, Isiklar E

PubMed · Aug 20, 2025
To create and test a locally adapted large language model (LLM) for automated scoring of radiology requisitions based on the Reason for exam Imaging Reporting and Data System (RI-RADS), and to evaluate its performance against a reference standard. This retrospective, double-center study included 131,683 radiology requisitions from two institutions. A Bidirectional Encoder Representations from Transformers (BERT)-based model was trained using 101,563 requisitions from Center 1 (including 1,500 synthetic examples) and externally tested on 18,887 requisitions from Center 2. The model's performance under two different classification strategies was evaluated against a reference standard created by three radiologists. Model performance was assessed using Cohen's kappa, accuracy, F1-score, sensitivity, and specificity with 95% confidence intervals. A total of 18,887 requisitions were evaluated in the external test set. External testing yielded an F1-score of 0.93 (95% CI: 0.912-0.943) and κ = 0.88 (95% CI: 0.871-0.884). Performance was highest for the common categories RI-RADS D and X (F1 ≥ 0.96) and lowest for the rare categories RI-RADS A and B (F1 ≤ 0.49). When requisitions were grouped into three categories (adequate, inadequate, and unacceptable), overall model performance improved (F1-score = 0.97; 95% CI: 0.96-0.97). The locally adapted BERT-based model demonstrated high performance and almost perfect agreement with radiologists in automated RI-RADS scoring, showing promise for integration into radiology workflows to improve requisition completeness and communication. Question: Can an LLM accurately and automatically score radiology requisitions based on standardized criteria to address the challenge of incomplete information in radiological practice? Findings: A locally adapted BERT-based model demonstrated high performance (F1-score 0.93) and almost perfect agreement with radiologists in automated RI-RADS scoring across a large, multi-institutional dataset. Clinical relevance: LLMs offer a scalable solution for automated scoring of radiology requisitions, with the potential to improve radiology workflow. Further refinement and integration into clinical practice could enhance communication, contributing to better diagnoses and patient care.
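
For orientation, the sketch below shows how a BERT-style sequence classifier could score a single requisition into an assumed set of RI-RADS grades using the Hugging Face transformers API. The checkpoint name, label set, and example text are placeholders, and the fine-tuning on labeled requisitions that the study actually performed is omitted here.

```python
# Minimal inference sketch for a BERT-style requisition classifier (untrained head).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-multilingual-cased"          # placeholder for the locally adapted model
labels = ["A", "B", "C", "D", "X"]                   # assumed RI-RADS grade set

tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint, num_labels=len(labels))              # classification head is randomly initialized
model.eval()

requisition = "55F, known breast cancer, rule out hepatic metastases. CT abdomen with contrast."
inputs = tokenizer(requisition, return_tensors="pt", truncation=True, max_length=256)
with torch.no_grad():
    logits = model(**inputs).logits
print("predicted RI-RADS grade:", labels[int(logits.argmax(dim=-1))])
```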

[Digital and intelligent medicine empowering precision abdominal surgery: today and the future].

Dong Q, Wang JM, Xiu WL

PubMed · Aug 20, 2025
The complex anatomical structure of the abdominal organs demands high precision in surgical procedures and increases the risk of postoperative complications. Advances in digital medicine have created new opportunities for precision surgery. This article summarizes the current applications of digital and intelligent technologies in precision abdominal surgery. Medical image processing and real-time monitoring technologies provide powerful tools for accurate diagnosis and treatment, while the big-data analysis and classification capabilities of artificial intelligence further enhance diagnostic efficiency and safety. The article also analyzes the advantages and limitations of digital intelligence in precision abdominal surgery and explores future directions for its development.

Characterizing the Impact of Training Data on Generalizability: Application in Deep Learning to Estimate Lung Nodule Malignancy Risk.

Obreja B, Bosma J, Venkadesh KV, Saghir Z, Prokop M, Jacobs C

PubMed · Aug 20, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To investigate the relationship between training data volume and performance of a deep learning AI algorithm developed to assess the malignancy risk of pulmonary nodules detected on low-dose CT scans in lung cancer screening. Materials and Methods This retrospective study used a dataset of 16077 annotated nodules (1249 malignant, 14828 benign) from the National Lung Screening Trial (NLST) to systematically train an AI algorithm for pulmonary nodule malignancy risk prediction across various stratified subsets ranging from 1.25% to the full dataset. External testing was conducted using data from the Danish Lung Cancer Screening Trial (DLCST) to determine the amount of training data at which the performance of the AI was statistically non-inferior to the AI trained on the full NLST cohort. A size-matched cancer-enriched subset of DLCST, where each malignant nodule had been paired in diameter with the closest two benign nodules, was used to investigate the amount of training data at which the performance of the AI algorithm was statistically non-inferior to the average performance of 11 clinicians. Results The external testing set included 599 participants (mean age 57.65 (SD 4.84) for females and mean age 59.03 (SD 4.94) for males) with 883 nodules (65 malignant, 818 benign). The AI achieved a mean AUC of 0.92 [95% CI: 0.88, 0.96] on the DLCST cohort when trained on the full NLST dataset. Training with 80% of NLST data resulted in non-inferior performance (mean AUC 0.92 [95%CI: 0.89, 0.96], <i>P</i> = .005). On the size-matched DLCST subset (59 malignant, 118 benign), the AI reached non-inferior clinician-level performance (mean AUC 0.82 [95% CI: 0.77, 0.86]) with 20% of the training data (<i>P</i> = .02). Conclusion The deep learning AI algorithm demonstrated excellent performance in assessing pulmonary nodule malignancy risk, achieving clinical level performance with a fraction of the training data and reaching peak performance before utilizing the full dataset. ©RSNA, 2025.

Physician-in-the-Loop Active Learning in Radiology Artificial Intelligence Workflows: Opportunities, Challenges, and Future Directions.

Luo M, Yousefirizi F, Rouzrokh P, Jin W, Alberts I, Gowdy C, Bouchareb Y, Hamarneh G, Klyuzhin I, Rahmim A

PubMed · Aug 20, 2025
Artificial intelligence (AI) is being explored for a growing range of applications in radiology, including image reconstruction, image segmentation, synthetic image generation, disease classification, worklist triage, and examination scheduling. However, training accurate AI models typically requires substantial amounts of expert-labeled data, which can be time-consuming and expensive to obtain. Active learning offers a potential strategy for mitigating these labeling requirements. In contrast with other machine-learning approaches for data-limited settings, active learning builds labeled datasets by identifying the most informative or uncertain data for human annotation, thereby reducing the labeling burden needed to reach a given level of model performance. This Review explores the application of active learning to radiology AI, focusing on its role in reducing the resources needed to train radiology AI models while enhancing physician-AI interaction and collaboration. We discuss how active learning can be incorporated into radiology workflows to promote physician-in-the-loop AI systems, presenting key active learning concepts and use cases for radiology tasks, illustrated with examples from the literature. Finally, we provide summary recommendations for integrating active learning into radiology workflows and highlight relevant opportunities, challenges, and future directions.
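
A toy uncertainty-sampling loop, shown below, captures the physician-in-the-loop idea discussed in this Review: at each round the model requests labels for the cases it is least certain about, and a simulated expert (here, the already-known labels) supplies them. The data, seed size, and query budget are illustrative assumptions.

```python
# Toy active-learning loop with uncertainty sampling on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2500, n_features=20, random_state=0)
X_pool, y_pool, X_test, y_test = X[:2000], y[:2000], X[2000:], y[2000:]

rng = np.random.default_rng(0)
labeled = list(rng.choice(2000, size=20, replace=False))        # small seed set
unlabeled = [i for i in range(2000) if i not in set(labeled)]

for round_ in range(5):
    model = LogisticRegression(max_iter=1000).fit(X_pool[labeled], y_pool[labeled])
    prob = model.predict_proba(X_pool[unlabeled])[:, 1]
    uncertainty = -np.abs(prob - 0.5)                           # prob near 0.5 = least certain
    query = [unlabeled[i] for i in np.argsort(uncertainty)[-20:]]   # cases sent for expert labeling
    labeled += query                                            # "physician" labels come from y_pool here
    unlabeled = [i for i in unlabeled if i not in set(query)]
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"round {round_}: {len(labeled)} labels, held-out AUC {auc:.3f}")
```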

Evolution and integration of artificial intelligence across the cancer continuum in women: advances in risk assessment, prevention, and early detection.

Desai M, Desai B

PubMed · Aug 20, 2025
Artificial intelligence (AI) is revolutionizing breast cancer prevention and control by improving risk assessment, prevention, and early diagnosis. With an emphasis on AI applications across the breast cancer continuum in women, this review summarizes recent developments, existing applications, and future prospects. We conducted an in-depth review of the literature on AI applications in breast cancer risk prediction, prevention, and early detection from 2000 to 2025, with particular emphasis on explainable AI (XAI), deep learning (DL), and machine learning (ML). We examined algorithmic fairness, model transparency, dataset representation, and clinical performance indicators. Compared with traditional methods, AI-based models consistently improved risk categorization, screening sensitivity, and early detection (AUCs ranging from 0.65 to 0.975). However, challenges remain in algorithmic bias, underrepresentation of minority populations, and limited external validation. Notably, 58% of public datasets focused on mammography, leaving gaps in modalities such as tomosynthesis and histopathology. AI technologies offer substantial opportunities for enhancing the diagnosis and treatment of breast cancer; however, transparent models, inclusive datasets, and standardized frameworks for explainability and external validation should be prioritized in future studies to ensure equitable and effective implementation.