
Prediction of mammographic breast density based on clinical breast ultrasound images using deep learning: a retrospective analysis.

Bunnell A, Valdez D, Wolfgruber TK, Quon B, Hung K, Hernandez BY, Seto TB, Killeen J, Miyoshi M, Sadowski P, Shepherd JA

PubMed · Jun 1 2025
Breast density, as derived from mammographic images and defined by the Breast Imaging Reporting & Data System (BI-RADS), is one of the strongest risk factors for breast cancer. Breast ultrasound is an alternative breast cancer screening modality, particularly useful in low-resource, rural contexts. To date, breast ultrasound has not been used to inform risk models that need breast density. The purpose of this study is to explore the use of artificial intelligence (AI) to predict the BI-RADS breast density category from clinical breast ultrasound imaging. We compared deep learning methods that predict breast density directly from breast ultrasound images with machine learning models trained on gray-level histograms of those images alone. The use of AI-derived breast ultrasound breast density as a breast cancer risk factor was compared to clinical BI-RADS breast density. Retrospective (2009-2022) breast ultrasound data were split by individual into 70/20/10% groups for training, validation, and held-out testing. A total of 405,120 clinical breast ultrasound images from 14,066 women (mean age 53 years, range 18-99 years) were retrospectively selected from three institutions: 10,393 women for training (302,574 images), 2,593 for validation (69,842 images), and 1,074 for testing (28,616 images). The AI model achieved an AUROC of 0.854 for breast density classification and statistically significantly outperformed all image statistic-based methods. In an existing clinical 5-year breast cancer risk model, breast ultrasound AI and clinical breast density predicted 5-year breast cancer risk with AUROCs of 0.606 and 0.599, respectively (DeLong's test p-value: 0.67). BI-RADS breast density can be estimated from breast ultrasound imaging with high accuracy, and the AI model provided superior estimates to the other machine learning approaches. Furthermore, age-adjusted, AI-derived breast ultrasound breast density provides predictive power similar to mammographic breast density in our population. Estimated breast density from ultrasound may therefore be useful for breast cancer risk assessment in areas where mammography is not available. National Cancer Institute.
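
For readers who want to see the baseline being compared against, below is a minimal sketch of the image-statistic approach: each ultrasound frame is reduced to a normalized gray-level histogram and a conventional classifier is fit on those features. The bin count, classifier choice, and toy data are illustrative assumptions, not the study's actual pipeline.

```python
# Hypothetical sketch of a gray-level-histogram baseline for BI-RADS density
# prediction. All names, sizes, and data here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

def histogram_features(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Normalized gray-level histogram of one 8-bit ultrasound frame."""
    hist, _ = np.histogram(image.ravel(), bins=n_bins, range=(0, 255))
    return hist / max(hist.sum(), 1)

# Toy data: random "ultrasound" frames and BI-RADS categories a-d encoded 0-3.
rng = np.random.default_rng(0)
X_imgs = [rng.integers(0, 256, size=(128, 128), dtype=np.uint8) for _ in range(200)]
y = rng.integers(0, 4, size=200)

X = np.stack([histogram_features(im) for im in X_imgs])
clf = GradientBoostingClassifier().fit(X[:150], y[:150])
probs = clf.predict_proba(X[150:])
print("toy multiclass AUROC:", roc_auc_score(y[150:], probs, multi_class="ovr"))
```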

Predictive models of severe disease in patients with COVID-19 pneumonia at an early stage on CT images using topological properties.

Iwasaki T, Arimura H, Inui S, Kodama T, Cui YH, Ninomiya K, Iwanaga H, Hayashi T, Abe O

PubMed · Jun 1 2025
Prediction of severe disease (SVD) in patients with coronavirus disease (COVID-19) pneumonia at an early stage could allow for more appropriate triage and improve patient prognosis. Moreover, visualization of the topological properties of COVID-19 pneumonia could help clinical physicians explain the reasons for their decisions. We aimed to construct predictive models of SVD in patients with COVID-19 pneumonia at an early stage on computed tomography (CT) images using SVD-specific features that can be visualized on accumulated Betti number (BN) maps. BN maps (b0 and b1 maps) were generated by calculating the BNs within a shifting kernel, in a manner similar to a convolution. Accumulated BN maps were constructed by summing the BN maps derived from a range of threshold values. Topological features were computed from the accumulated BN maps as intrinsic topological properties of COVID-19 pneumonia. Predictive models of SVD were constructed with two feature selection methods and three machine learning models using nested fivefold cross-validation. The proposed model achieved an area under the receiver-operating characteristic curve of 0.854 and a sensitivity of 0.908 in a test fold. These results suggest that topological image features can identify, at an early stage, COVID-19 pneumonia that will progress to SVD.
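
A minimal sketch of the accumulated Betti number idea, restricted to b0 (connected components), is shown below: counts are computed inside a kernel slid over a thresholded slice, and the resulting maps are summed over several thresholds. Kernel size, stride, and thresholds are illustrative assumptions; the b1 (loop) maps and the study's exact settings are not reproduced.

```python
# Hedged sketch of an accumulated Betti-0 map over a toy "CT" slice.
import numpy as np
from scipy import ndimage

def betti0_map(slice_2d: np.ndarray, thr: float, kernel: int = 9, stride: int = 4) -> np.ndarray:
    """Count connected components (b0) inside a kernel slid over a thresholded slice."""
    binary = slice_2d >= thr
    h, w = binary.shape
    out = np.zeros(((h - kernel) // stride + 1, (w - kernel) // stride + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = binary[i * stride:i * stride + kernel, j * stride:j * stride + kernel]
            _, n_components = ndimage.label(patch)   # b0 = number of connected components
            out[i, j] = n_components
    return out

def accumulated_betti0(slice_2d: np.ndarray, thresholds) -> np.ndarray:
    """Sum the b0 maps over a range of thresholds, as described in the abstract."""
    return sum(betti0_map(slice_2d, t) for t in thresholds)

ct_slice = np.random.default_rng(1).normal(size=(64, 64))   # toy slice, not real CT
acc_b0 = accumulated_betti0(ct_slice, thresholds=np.linspace(-1, 1, 5))
print(acc_b0.shape, acc_b0.max())
```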

A magnetic resonance imaging (MRI)-based deep learning radiomics model predicts recurrence-free survival in lung cancer patients after surgical resection of brain metastases.

Li B, Li H, Chen J, Xiao F, Fang X, Guo R, Liang M, Wu Z, Mao J, Shen J

PubMed · Jun 1 2025
To develop and validate a magnetic resonance imaging (MRI)-based deep learning radiomics model (DLRM) to predict recurrence-free survival (RFS) in lung cancer patients after surgical resection of brain metastases (BrMs). A total of 215 lung cancer patients with BrMs confirmed by surgical pathology were retrospectively included from five centres: 167 patients were assigned to the training cohort and 48 to the external test cohort. All patients underwent regular follow-up brain MRIs. Clinical and morphological MRI models for predicting RFS were built using univariate and multivariate Cox regressions. Handcrafted and deep learning (DL) signatures were constructed from pretreatment MR images of the BrMs using the least absolute shrinkage and selection operator (LASSO) method. A DLRM was established by integrating the clinical and morphological MRI predictors with the handcrafted and DL signatures based on the multivariate Cox regression coefficients. The Harrell C-index, area under the receiver operating characteristic curve (AUC), and Kaplan-Meier survival analysis were used to evaluate model performance. The DLRM showed satisfactory performance in predicting RFS and 6- to 18-month intracranial recurrence in lung cancer patients after BrMs resection, achieving a C-index of 0.79 and AUCs of 0.84-0.90 in the training set and a C-index of 0.74 and AUCs of 0.71-0.85 in the external test set. The DLRM outperformed the clinical model, morphological MRI model, handcrafted signature, DL signature, and clinical-morphological MRI model in predicting RFS (P < 0.05). The DLRM successfully classified patients into high-risk and low-risk intracranial recurrence groups (P < 0.001). This MRI-based DLRM could predict RFS in lung cancer patients after surgical resection of BrMs.
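
As a rough illustration of the signature-building step, the hedged sketch below fits a LASSO-penalized Cox model on toy MRI features and scores it with Harrell's C-index. The feature names, penalty strength, and toy data are assumptions; the study's handcrafted and DL features are not reproduced.

```python
# Minimal sketch of a LASSO-penalized Cox signature for recurrence-free survival.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

rng = np.random.default_rng(42)
n = 215
df = pd.DataFrame(rng.normal(size=(n, 5)),
                  columns=[f"radiomic_{i}" for i in range(5)])   # toy MRI features
df["rfs_months"] = rng.exponential(scale=18, size=n)             # toy survival times
df["recurred"] = rng.integers(0, 2, size=n)                      # toy event indicator

# l1_ratio=1.0 makes the elastic-net penalty pure LASSO, shrinking weak features.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(df, duration_col="rfs_months", event_col="recurred")

risk_score = cph.predict_partial_hazard(df)       # the "signature" per patient
# Higher partial hazard means worse prognosis, so negate for the C-index.
print("C-index:", concordance_index(df["rfs_months"], -risk_score, df["recurred"]))
```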

Radiomics across modalities: a comprehensive review of neurodegenerative diseases.

Inglese M, Conti A, Toschi N

PubMed · Jun 1 2025
Radiomics enables the extraction of quantitative features from medical images that can reveal tissue patterns generally invisible to human observers. Despite the challenges in visually interpreting radiomic features and the computational resources required to generate them, they hold significant value in downstream automated processing. For instance, in statistical or machine learning frameworks, radiomic features enhance sensitivity and specificity, making them indispensable for tasks such as diagnosis, prognosis, prediction, monitoring, image-guided interventions, and evaluating therapeutic responses. This review explores the application of radiomics in neurodegenerative diseases, with a focus on Alzheimer's disease, Parkinson's disease, Huntington's disease, and multiple sclerosis. While the radiomics literature often focuses on magnetic resonance imaging (MRI) and computed tomography (CT), this review also covers its broader application in nuclear medicine, with use cases of positron emission tomography (PET) and single-photon emission computed tomography (SPECT) radiomics. Additionally, we review integrated radiomics, where features from multiple imaging modalities are fused to improve model performance. This review also highlights the growing integration of radiomics with artificial intelligence and the need for feature standardisation and reproducibility to facilitate its translation into clinical practice.

An explainable adaptive channel weighting-based deep convolutional neural network for classifying renal disorders in computed tomography images.

Loganathan G, Palanivelan M

PubMed · Jun 1 2025
Renal disorders are a significant public health concern and a cause of mortality related to renal failure. Manual diagnosis is subjective, labor-intensive, and depends on the expertise of nephrologists in renal anatomy. To improve workflow efficiency and diagnostic accuracy, we propose an automated deep learning model, EACWNet, which combines an adaptive channel weighting-based deep convolutional neural network with explainable artificial intelligence. The proposed model categorizes renal computed tomography images into classes such as cyst, normal, tumor, and stone. The adaptive channel weighting module uses both global and local contextual information to refine the channel weights of the final feature map by integrating a scale-adaptive channel attention module into the higher convolutional blocks of the VGG-19 backbone. The efficacy of the EACWNet model was assessed on a publicly available renal CT image dataset, attaining an accuracy of 98.87% and a 1.75% improvement over the backbone model. However, the model exhibits class-wise precision variation, achieving higher precision for the cyst, normal, and tumor classes but lower precision for the stone class due to its inherent variability and heterogeneity. Furthermore, the model predictions were analyzed with an explainable artificial intelligence method, local interpretable model-agnostic explanations (LIME), to better visualize and understand them.
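
The sketch below shows a generic squeeze-and-excitation-style channel attention block re-weighting a VGG-19 feature map, in the spirit of the adaptive channel weighting described above. It is not the paper's exact scale-adaptive module; the four-class head, reduction ratio, and input sizes are illustrative assumptions.

```python
# Hedged PyTorch sketch: channel attention on top of a VGG-19 feature extractor.
import torch
import torch.nn as nn
from torchvision.models import vgg19

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)                      # global context per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)    # per-channel weights
        return x * w                                              # re-weighted feature map

backbone = vgg19(weights=None).features                           # VGG-19 conv blocks
attn = ChannelAttention(channels=512)
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(512, 4))

x = torch.randn(2, 3, 224, 224)                                   # toy renal CT batch
logits = head(attn(backbone(x)))                                   # cyst/normal/tumor/stone
print(logits.shape)                                                # torch.Size([2, 4])
```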

Driving Knowledge to Action: Building a Better Future With Artificial Intelligence-Enabled Multidisciplinary Oncology.

Loaiza-Bonilla A, Thaker N, Chung C, Parikh RB, Stapleton S, Borkowski P

PubMed · Jun 1 2025
Artificial intelligence (AI) is transforming multidisciplinary oncology at an unprecedented pace, redefining how clinicians detect, classify, and treat cancer. From earlier and more accurate diagnoses to personalized treatment planning, AI's impact is evident across radiology, pathology, radiation oncology, and medical oncology. By leveraging vast and diverse data-including imaging, genomic, clinical, and real-world evidence-AI algorithms can uncover complex patterns, accelerate drug discovery, and help identify optimal treatment regimens for each patient. However, realizing the full potential of AI also necessitates addressing concerns regarding data quality, algorithmic bias, explainability, privacy, and regulatory oversight-especially in low- and middle-income countries (LMICs), where disparities in cancer care are particularly pronounced. This study provides a comprehensive overview of how AI is reshaping cancer care, reviews its benefits and challenges, and outlines ethical and policy implications in line with ASCO's 2025 theme, <i>Driving Knowledge to Action.</i> We offer concrete calls to action for clinicians, researchers, industry stakeholders, and policymakers to ensure that AI-driven, patient-centric oncology is accessible, equitable, and sustainable worldwide.

Tailoring ventilation and respiratory management in pediatric critical care: optimizing care with precision medicine.

Beauchamp FO, Thériault J, Sauthier M

PubMed · Jun 1 2025
Critically ill children admitted to the intensive care unit frequently need respiratory care to support lung function. Mechanical ventilation is a complex field with multiple parameters to set. The development of precision medicine will allow clinicians to personalize respiratory care and improve patients' outcomes. Lung and diaphragmatic ultrasound, electrical impedance tomography, neurally adjusted ventilatory assist ventilation, as well as the use of monitoring data in machine learning models, are increasingly used to tailor care. Each modality offers insights into different aspects of the patient's respiratory system function and enables the adjustment of treatment to better support the patient's physiology. Precision medicine in respiratory care has been associated with decreased ventilation time, increased extubation and ventilation weaning success, and increased ability to identify phenotypes to guide treatment and predict outcomes. This review focuses on the use of precision medicine in the setting of pediatric acute respiratory distress syndrome, asthma, bronchiolitis, extubation readiness trials and ventilation weaning, ventilator-associated pneumonia, and other respiratory tract infections. Precision medicine is revolutionizing respiratory care and will decrease complications associated with ventilation. More research is needed to standardize its use and better evaluate its impact on patient outcomes.

Artificial intelligence in pediatric osteopenia diagnosis: evaluating deep network classification and model interpretability using wrist X-rays.

Harris CE, Liu L, Almeida L, Kassick C, Makrogiannis S

PubMed · Jun 1 2025
Osteopenia is a bone disorder that causes low bone density and affects millions of people worldwide. Diagnosis of this condition is commonly achieved through clinical assessment of bone mineral density (BMD). State-of-the-art machine learning (ML) techniques, such as convolutional neural networks (CNNs) and transformer models, have gained increasing popularity in medicine. In this work, we employ six deep networks for osteopenia vs. healthy bone classification using X-ray imaging from the pediatric wrist dataset GRAZPEDWRI-DX. We apply two explainable AI techniques to analyze and interpret visual explanations for network decisions. Experimental results show that deep networks are able to effectively learn osteopenic and healthy bone features, achieving high classification accuracy. Among the six evaluated networks, DenseNet201 with transfer learning yielded the top classification accuracy of 95.2%. Furthermore, visual explanations of CNN decisions provide valuable insight into the black-box inner workings and present interpretable results. Our evaluation of deep network classification results highlights their capability to accurately differentiate between osteopenic and healthy bones in pediatric wrist X-rays. The combination of high classification accuracy and interpretable visual explanations underscores the promise of incorporating machine learning techniques into clinical workflows for the early and accurate diagnosis of osteopenia.
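
As a hedged illustration of the transfer-learning setup, the sketch below loads an ImageNet-pretrained DenseNet201, freezes the feature extractor, and replaces the classifier for a binary osteopenic-vs-healthy task. The optimizer, learning rate, and toy batch are assumptions, not the paper's training recipe.

```python
# Minimal transfer-learning sketch with DenseNet201 for binary bone classification.
import torch
import torch.nn as nn
from torchvision.models import densenet201, DenseNet201_Weights

model = densenet201(weights=DenseNet201_Weights.IMAGENET1K_V1)
for p in model.parameters():                 # freeze the pretrained feature extractor
    p.requires_grad = False
model.classifier = nn.Linear(model.classifier.in_features, 2)   # new 2-class head

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)              # toy batch standing in for wrist X-ray crops
y = torch.tensor([0, 1, 0, 1])               # 0 = healthy, 1 = osteopenic (toy labels)
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```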

Fully automated image quality assessment based on deep learning for carotid computed tomography angiography: A multicenter study.

Fu W, Ma Z, Yang Z, Yu S, Zhang Y, Zhang X, Mei B, Meng Y, Ma C, Gong X

PubMed · Jun 1 2025
To develop and evaluate the performance of a fully automated model, based on deep learning and a multiple logistic regression algorithm, for image quality assessment (IQA) of carotid computed tomography angiography (CTA) images. This study retrospectively collected 840 carotid CTA images from four tertiary hospitals. Three radiologists independently assessed the image quality using a 3-point Likert scale, based on the degree of noise, vessel enhancement, arterial vessel contrast, vessel edge sharpness, and overall diagnostic acceptability. An automated assessment model was developed using a training dataset of 600 carotid CTA images. The assessment steps included: (i) selection of objective representative slices; (ii) use of a 3D Res U-Net to extract objective indices from the representative slices; and (iii) use of single objective indices and combinations of multiple indices to develop logistic regression models for IQA. In the internal and external test datasets (n = 240), model performance was evaluated using sensitivity, specificity, precision, F-score, accuracy, and the area under the receiver operating characteristic curve (AUC), and the models' IQA results were compared with the radiologists' consensus. The representative slices were determined based on the same length model. The multi-index model performed excellently in the internal and external test datasets, with AUCs of 0.98 and 0.97. The consistency between the model and the radiologists reached 91.8% (95% CI: 87.0-96.5) and 92.6% (95% CI: 86.9-98.4) in the internal and external test datasets, respectively. The fully automated multi-index model matched the radiologists' subjective assessments while offering greater efficiency for IQA.
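
The sketch below illustrates the multi-index step in isolation: several objective indices are combined in a logistic regression that predicts whether a study is diagnostically acceptable, evaluated with AUC. The index names, toy labels, and train/test split are illustrative assumptions, not the study's extracted indices.

```python
# Hedged sketch of a multi-index logistic regression for CTA image quality assessment.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
n = 600
indices = pd.DataFrame({
    "noise": rng.normal(size=n),
    "vessel_enhancement": rng.normal(size=n),
    "arterial_contrast": rng.normal(size=n),
    "edge_sharpness": rng.normal(size=n),
})
# Toy labels: 1 = diagnostically acceptable, 0 = not acceptable.
acceptable = (indices["vessel_enhancement"] - indices["noise"]
              + rng.normal(scale=0.5, size=n) > 0).astype(int)

model = LogisticRegression().fit(indices[:480], acceptable[:480])
probs = model.predict_proba(indices[480:])[:, 1]
print("toy AUC:", round(roc_auc_score(acceptable[480:], probs), 3))
```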

Predicting lung cancer bone metastasis using CT and pathological imaging with a Swin Transformer model.

Li W, Zou X, Zhang J, Hu M, Chen G, Su S

PubMed · Jun 1 2025
Bone metastasis is a common and serious complication in lung cancer patients, leading to severe pain, pathological fractures, and reduced quality of life. Early prediction of bone metastasis can enable timely interventions and improve patient outcomes. In this study, we developed a multimodal Swin Transformer-based deep learning model for predicting bone metastasis risk in lung cancer patients by integrating CT imaging and pathological data. A total of 215 patients with confirmed lung cancer diagnoses, including those with and without bone metastasis, were included. The model was designed to process high-resolution CT images and digitized histopathological images, with the features extracted independently by two Swin Transformer networks. These features were then fused using decision-level fusion techniques to improve classification accuracy. The Swin-Dual Fusion Model achieved superior performance compared to single-modality models and conventional architectures such as ResNet50, with an AUC of 0.966 on the test data and 0.967 on the training data. This integrated model demonstrated high accuracy, sensitivity, and specificity, making it a promising tool for clinical application in predicting bone metastasis risk. The study emphasizes the potential of transformer-based models to revolutionize bone oncology through advanced multimodal analysis and early prediction of metastasis, ultimately improving patient care and treatment outcomes.
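
As a rough sketch of decision-level fusion, the example below averages class probabilities from two independent Swin Transformer branches, one fed CT patches and one fed pathology patches. The torchvision Swin-T backbones, equal fusion weights, and toy inputs are assumptions, not the study's architecture.

```python
# Hedged sketch of decision-level (late) fusion between two Swin Transformer branches.
import torch
import torch.nn as nn
from torchvision.models import swin_t

def make_branch(num_classes: int = 2) -> nn.Module:
    """One Swin-T branch with its classification head replaced for the binary task."""
    net = swin_t(weights=None)
    net.head = nn.Linear(net.head.in_features, num_classes)
    return net

ct_branch = make_branch()
path_branch = make_branch()

ct_img = torch.randn(2, 3, 224, 224)        # toy CT patches
path_img = torch.randn(2, 3, 224, 224)      # toy histopathology patches

with torch.no_grad():
    p_ct = torch.softmax(ct_branch(ct_img), dim=1)
    p_path = torch.softmax(path_branch(path_img), dim=1)
    fused = 0.5 * p_ct + 0.5 * p_path       # average the branch probabilities
print(fused)                                 # fused bone-metastasis probabilities per case
```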
