Predicting the efficacy of bevacizumab on peritumoral edema based on imaging features and machine learning.

Bai X, Feng M, Ma W, Wang S

PubMed · May 8, 2025
This study proposes a novel approach to predicting the efficacy of bevacizumab (BEV) against peritumoral edema in patients with metastatic brain tumors by integrating machine learning (ML) with comprehensive imaging and clinical data. A retrospective analysis was performed on 300 patients who received BEV treatment between September 2013 and January 2024. The dataset incorporated 13 predictive features: 8 clinical and 5 radiological variables. It was divided into a training set (70%) and a test set (30%) using stratified sampling. Preprocessing comprised missing-value imputation with the MICE method, outlier detection and adjustment, and feature scaling. Four algorithms were selected to construct binary classification models: Random Forest (RF), Logistic Regression, Gradient Boosting Tree, and Naive Bayes. Tenfold cross-validation was used during training, and regularization, hyperparameter optimization, and oversampling were applied to mitigate overfitting. The RF model demonstrated superior performance, achieving an accuracy of 0.89, a precision of 0.94, and an F1-score of 0.92, with both AUC-ROC and AUC-PR reaching 0.91. Feature importance analysis consistently identified edema volume as the most significant predictor, followed by edema index, patient age, and tumor volume. Traditional multivariate logistic regression corroborated these findings, confirming edema volume and edema index as independent predictors (p < 0.01). These results highlight the potential of ML-driven predictive models to optimize BEV treatment selection, reduce unnecessary treatment risk, and improve clinical decision-making in neuro-oncology.
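
The abstract does not publish code, but the described pipeline maps naturally onto standard scikit-learn components. Below is a minimal sketch under stated assumptions: the file name, the "response" column, and the hyperparameters are hypothetical, IterativeImputer stands in for MICE, and the oversampling step is omitted for brevity.

```python
# Sketch of the described pipeline: MICE-style imputation, feature scaling,
# a stratified 70/30 split, and a tenfold cross-validated random forest.
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.ensemble import RandomForestClassifier

df = pd.read_csv("bev_cohort.csv")                 # hypothetical 13-feature table
X, y = df.drop(columns=["response"]), df["response"]

# MICE-style multiple imputation for missing values, then scaling
X = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(X), columns=X.columns)
X = pd.DataFrame(StandardScaler().fit_transform(X), columns=X.columns)

# Stratified 70/30 split, as in the study
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=500, random_state=0)
print("10-fold CV AUC:", cross_val_score(rf, X_tr, y_tr, cv=10, scoring="roc_auc").mean())
rf.fit(X_tr, y_tr)
print("Test accuracy:", rf.score(X_te, y_te))
```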

Cross-scale prediction of glioblastoma MGMT methylation status based on deep learning combined with magnetic resonance images and pathology images

Wu, X., Wei, W., Li, Y., Ma, M., Hu, Z., Xu, Y., Hu, W., Chen, G., Zhao, R., Kang, X., Yin, H., Xi, Y.

medRxiv preprint · May 8, 2025
Background: In glioblastoma (GBM), promoter methylation of O6-methylguanine-DNA methyltransferase (MGMT) is associated with benefit from chemotherapy but has not been accurately evaluated from radiological and pathological sections. We aimed to develop and validate an MRI- and pathology-image-based deep learning radiopathomics model for predicting MGMT promoter methylation in patients with GBM. Methods: Pathologically confirmed isocitrate dehydrogenase (IDH) wild-type GBM patients (n = 207) were retrospectively collected from three centers; all underwent MRI within 2 weeks before surgery. A pre-trained ResNet50 was used as the feature extractor, yielding features of 1024 dimensions from MRI and pathology images, respectively, which were then screened for modeling. Feature fusion was performed by calculating normalized multimodal MRI fusion features together with pathological features, and MGMT prediction models based on deep learning radiomics, pathomics, and radiopathomics (DLRM, DLPM, DLRPM) were constructed and applied to internal and external validation cohorts. Results: Across the training, internal validation, and external validation cohorts, the DLRPM further improved predictive performance, performing significantly better than the DLRM and DLPM, with AUCs of 0.920 (95% CI 0.870-0.968), 0.854 (95% CI 0.702-1), and 0.840 (95% CI 0.625-1), respectively. Conclusion: We developed and validated cross-scale radiology and pathology models for predicting MGMT methylation status; the DLRPM performed best, and this cross-scale approach paves the way for further research and clinical application.
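
For orientation, a feature-extraction sketch of the kind described is shown below. Note one assumption: ResNet50's pooled features are 2048-dimensional, so a linear projection to 1024 is assumed here to match the reported dimensionality; the fusion rule is likewise an illustrative reading of "normalized" fusion, not the authors' exact formula.

```python
# Sketch: pretrained-ResNet50 feature extraction and late fusion of
# MRI and pathology feature vectors.
import torch
import torch.nn as nn
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = resnet50(weights=weights)
backbone.fc = nn.Identity()            # expose the 2048-D pooled features
backbone.eval()
project = nn.Linear(2048, 1024)        # assumed reduction to 1024 features

preprocess = weights.transforms()      # standard ImageNet preprocessing

@torch.no_grad()
def extract_features(img):             # img: PIL image (MRI slice or pathology tile)
    x = preprocess(img).unsqueeze(0)   # (1, 3, 224, 224)
    return project(backbone(x))        # (1, 1024)

def fuse(mri_feats, path_feats):
    """L2-normalize each modality's vector, then concatenate (illustrative)."""
    norm = lambda v: v / v.norm(dim=-1, keepdim=True)
    return torch.cat([norm(mri_feats), norm(path_feats)], dim=-1)
```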

Weakly supervised language models for automated extraction of critical findings from radiology reports.

Das A, Talati IA, Chaves JMZ, Rubin D, Banerjee I

PubMed · May 8, 2025
Critical findings in radiology reports are life threatening conditions that need to be communicated promptly to physicians for timely management of patients. Although challenging, advancements in natural language processing (NLP), particularly large language models (LLMs), now enable the automated identification of key findings from verbose reports. Given the scarcity of labeled critical findings data, we implemented a two-phase, weakly supervised fine-tuning approach on 15,000 unlabeled Mayo Clinic reports. This fine-tuned model then automatically extracted critical terms on internal (Mayo Clinic, n = 80) and external (MIMIC-III, n = 123) test datasets, validated against expert annotations. Model performance was further assessed on 5000 MIMIC-IV reports using LLM-aided metrics, G-eval and Prometheus. Both manual and LLM-based evaluations showed improved task alignment with weak supervision. The pipeline and model, publicly available under an academic license, can aid in critical finding extraction for research and clinical use ( https://github.com/dasavisha/CriticalFindings_Extract ).
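
The core idea of weak supervision here is to manufacture noisy training signal from unlabeled reports. A minimal sketch of that idea follows; the pattern list and function are hypothetical illustrations, not the authors' actual labeling rules (see their repository for the real pipeline).

```python
# Illustrative weak labeling: derive noisy critical-finding sentences from
# unlabeled reports with simple rules, to serve as pseudo-labels for a
# first fine-tuning phase before a smaller expert-labeled phase.
import re

CRITICAL_PATTERNS = [
    r"pneumothorax", r"pulmonary embol\w+", r"acute (intracranial )?hemorrhage",
    r"free (intraperitoneal )?air", r"aortic dissection",
]

def weak_label(report: str) -> list[str]:
    """Return noisy critical-finding sentences for one report."""
    sentences = re.split(r"(?<=[.!?])\s+", report)
    return [s for s in sentences
            if any(re.search(p, s, flags=re.IGNORECASE) for p in CRITICAL_PATTERNS)]

corpus = ["No acute findings.", "New large right pneumothorax is present."]
pairs = [(r, weak_label(r)) for r in corpus]   # (report, pseudo-extraction) pairs
print(pairs[1])
```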

Cross-Institutional Evaluation of Large Language Models for Radiology Diagnosis Extraction: A Prompt-Engineering Perspective.

Moassefi M, Houshmand S, Faghani S, Chang PD, Sun SH, Khosravi B, Triphati AG, Rasool G, Bhatia NK, Folio L, Andriole KP, Gichoya JW, Erickson BJ

PubMed · May 8, 2025
The rapid evolution of large language models (LLMs) offers promising opportunities for radiology report annotation, aiding in determining the presence of specific findings. This study evaluates the effectiveness of a human-optimized prompt for labeling radiology reports across multiple institutions using LLMs. Six distinct institutions collected 500 radiology reports: 100 in each of five categories. A standardized Python script was distributed to participating sites, allowing use of one common, locally executed LLM with a standard human-optimized prompt. The script executed the LLM's analysis for each report and compared predictions to reference labels provided by local investigators. Performance was measured as accuracy, and results were aggregated centrally. The human-optimized prompt demonstrated high consistency across sites and pathologies. Preliminary analysis indicates significant agreement between the LLM's outputs and the investigator-provided references across institutions. At one site, eight LLMs were systematically compared, with Llama 3.1 70b achieving the highest accuracy in identifying the specified findings. Comparable performance with Llama 3.1 70b was observed at two additional centers, demonstrating the model's robust adaptability to variations in report structure and institutional practice. These findings illustrate the potential of optimized prompt engineering for cross-institutional radiology report labeling; the approach is straightforward while maintaining high accuracy and adaptability. Future work will explore model robustness to diverse report structures and further refine prompts to improve generalizability.
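
The distributed-script design reduces to a small loop: one shared prompt, a local LLM call, and accuracy against local reference labels. A sketch follows; the prompt wording and `query_local_llm` are hypothetical stand-ins, since the actual script and prompt are not reproduced in the abstract.

```python
# Sketch of the standardized evaluation script's core loop.
PROMPT = (
    "You are a radiology report labeler. Answer strictly 'yes' or 'no': "
    "does the following report contain {finding}?\n\nREPORT:\n{report}"
)

def query_local_llm(prompt: str) -> str:
    """Stand-in for the site's locally executed LLM (e.g., Llama 3.1 70b)."""
    raise NotImplementedError

def label_reports(reports, finding, references):
    correct = 0
    for report, ref in zip(reports, references):
        answer = query_local_llm(PROMPT.format(finding=finding, report=report))
        pred = answer.strip().lower().startswith("yes")
        correct += int(pred == ref)
    return correct / len(reports)   # per-site accuracy, aggregated centrally
```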

Radiomics-based machine learning in prediction of response to neoadjuvant chemotherapy in osteosarcoma: A systematic review and meta-analysis.

Salimi M, Houshi S, Gholamrezanezhad A, Vadipour P, Seifi S

PubMed · May 8, 2025
Osteosarcoma (OS) is the most common primary bone malignancy, and neoadjuvant chemotherapy (NAC) improves survival rates. However, OS heterogeneity results in variable treatment response, highlighting the need for reliable, non-invasive tools to predict NAC response. Radiomics-based machine learning (ML) offers the potential to identify imaging biomarkers that predict treatment outcome. This systematic review and meta-analysis evaluated the accuracy and reliability of radiomics models for predicting NAC response in OS. A systematic search was conducted in PubMed, Embase, Scopus, and Web of Science up to November 2024. Studies using radiomics-based ML for NAC response prediction in OS were included. Pooled sensitivity, specificity, and AUC for training and validation cohorts were calculated using bivariate random-effects modeling, with clinical-combined models analyzed separately. Quality was assessed with the QUADAS-2 tool, the radiomics quality score (RQS), and METRICS scores. Sixteen studies were included, 63% using MRI and 37% using CT. Twelve studies, comprising 1639 participants, entered the meta-analysis. Pooled metrics for training cohorts showed an AUC of 0.93, sensitivity of 0.89, and specificity of 0.85; validation cohorts achieved an AUC of 0.87, sensitivity of 0.81, and specificity of 0.82. Clinical-combined models outperformed radiomics-only models. The mean RQS was 9.44 ± 3.41, and the mean METRICS score was 60.8% ± 17.4%. Radiomics-based ML shows promise for predicting NAC response in OS, especially when combined with clinical indicators; however, limitations in external validation and methodological consistency must be addressed.
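
To make the pooling step concrete, here is a deliberately simplified illustration: a univariate DerSimonian-Laird random-effects average of logit-transformed sensitivities. The study itself used a bivariate model that pools sensitivity and specificity jointly, and the counts below are invented for the example.

```python
# Simplified random-effects pooling of per-study sensitivities (DL method).
import numpy as np

def pool_logit_dl(tp, fn):
    tp, fn = np.asarray(tp, float), np.asarray(fn, float)
    p = (tp + 0.5) / (tp + fn + 1.0)              # continuity-corrected sensitivity
    y = np.log(p / (1 - p))                       # logit transform
    v = 1 / (tp + 0.5) + 1 / (fn + 0.5)           # approximate variance of logit
    w = 1 / v                                     # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)            # Cochran's Q
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    w_star = 1 / (v + tau2)                       # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    return 1 / (1 + np.exp(-pooled))              # back-transform to a proportion

# Hypothetical per-study true positives / false negatives:
print(pool_logit_dl(tp=[40, 55, 30], fn=[5, 10, 8]))
```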

Automated detection of bottom-of-sulcus dysplasia on MRI-PET in patients with drug-resistant focal epilepsy

Macdonald-Laurs, E., Warren, A. E. L., Mito, R., Genc, S., Alexander, B., Barton, S., Yang, J. Y., Francis, P., Pardoe, H. R., Jackson, G., Harvey, A. S.

medRxiv preprint · May 8, 2025
Background and Objectives: Bottom-of-sulcus dysplasia (BOSD) is a diagnostically challenging subtype of focal cortical dysplasia; 60% are missed on patients' first MRI. Automated MRI-based detection methods have been developed for focal cortical dysplasia, but not for BOSD specifically, and the use of FDG-PET alongside MRI is not established in automated methods. We report the development and performance of an automated BOSD detector using combined MRI+PET data. Methods: The training set comprised 54 mostly operated patients with BOSD. The test sets comprised 17 subsequently diagnosed patients with BOSD from the same center and 12 published patients from a different center. Across the training and test sets, 81% of patients had reportedly normal first MRIs, and most BOSDs were <1.5 cm³. In the training set, 12 features from T1 MRI, FLAIR MRI, and FDG-PET were evaluated using a novel "pseudo-control" normalization approach to determine which features best distinguished dysplastic from normal-appearing cortex. Using the Multi-centre Epilepsy Lesion Detection group's machine-learning detection method with the addition of FDG-PET, neural network classifiers were then trained and tested on MRI+PET, MRI-only, and PET-only features. The proportion of patients whose BOSD was overlapped by the top output cluster, and by the top five output clusters, was assessed. Results: Cortical and subcortical hypometabolism on FDG-PET were superior to MRI features in discriminating dysplastic from normal-appearing cortex. When the BOSD detector was trained on MRI+PET features, 87% of BOSDs were overlapped by one of the top five clusters (69% by the top cluster) in the training set, 76% in the prospective test set (71% top cluster), and 75% in the published test set (42% top cluster). Cluster overlap was similar when the detector was trained and tested on PET-only features but lower when trained and tested on MRI-only features. Conclusion: Detection of BOSD is possible using established MRI-based automated detection methods supplemented with FDG-PET features and trained on a BOSD-specific cohort. In clinical practice, an MRI+PET BOSD detector could improve assessment and outcomes in seemingly MRI-negative patients being considered for epilepsy surgery.
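
One plausible reading of the "pseudo-control" normalization is a vertexwise z-score against the distribution of the same feature in normal-appearing cortex, sketched below. The array shapes and the control-sampling strategy are assumptions for illustration, not the preprint's exact procedure.

```python
# Sketch: express each surface vertex's feature (e.g., FDG-PET uptake)
# as a z-score against pseudo-control (normal-appearing cortex) values.
import numpy as np

def pseudo_control_zscore(patient_feat, control_feats):
    """
    patient_feat:  (n_vertices,) one feature map for the patient
    control_feats: (n_controls, n_vertices) the same feature sampled from
                   normal-appearing cortex (the pseudo-controls)
    """
    mu = control_feats.mean(axis=0)
    sd = control_feats.std(axis=0) + 1e-8
    return (patient_feat - mu) / sd   # hypometabolism -> strongly negative z

# Vertices with extreme z-scores across features would feed the neural-network
# classifier; clusters of flagged vertices become candidate BOSD locations.
```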

Impact of the recent advances in coronary artery disease imaging on pilot medical certification and aviation safety: current state and future perspective.

Benjamin MM, Rabbat MG, Park W, Benjamin M, Davenport E

PubMed · May 7, 2025
Coronary artery disease (CAD) is highly prevalent among pilots owing to the nature of their lifestyle and occupational stresses. CAD is one of the most common conditions affecting pilots' medical certification and frequently goes undisclosed by pilots who fear losing their certification. Traditional screening methods, such as resting electrocardiograms (EKGs) and functional stress tests, have limitations, especially in detecting non-obstructive CAD. Recent advances in cardiac imaging are challenging current paradigms of CAD screening and risk assessment, offering tools uniquely suited to the occupational health challenges faced by pilots. Coronary artery calcium scoring (CACS) has proven valuable in refining risk stratification in asymptomatic individuals. Coronary computed tomography angiography (CCTA) is increasingly adopted as a superior tool for ruling out CAD in symptomatic individuals, assessing plaque burden, and morphologically identifying vulnerable plaque. CT-derived fractional flow reserve (CT-FFR) adds a physiologic component to the anatomic strengths of CCTA. Cardiac magnetic resonance imaging (CMR) is now used both as a prognostic tool following a coronary event and as a stress-testing modality. Investigational technologies such as pericoronary fat attenuation and artificial intelligence (AI)-enabled plaque quantification hold promise for enhancing diagnostic accuracy and risk stratification. This review highlights the interplay between occupational demands, regulatory considerations, and the limitations of traditional modalities for pilot CAD screening and surveillance. We also discuss the potential role of recent advances in cardiac imaging in optimizing pilot health and flight safety.

Radiological evaluation and clinical implications of deep learning- and MRI-based synthetic CT for the assessment of cervical spine injuries.

Fischer G, Schlosser TPC, Dietrich TJ, Kim OC, Zdravkovic V, Martens B, Fehlings MG, Jans L, Vereecke E, Stienen MN, Hejrati N

PubMed · May 7, 2025
Efficient evaluation of soft tissues and bony structures after cervical spine trauma is critical. We sought to evaluate the diagnostic validity of magnetic resonance imaging (MRI)-based synthetic CT (sCT) compared with conventional computed tomography (CT) for cervical spine injuries. In a prospective, multicenter study, patients with cervical spine injuries underwent CT and MRI within 48 h of injury. A panel of five clinicians independently reviewed the images for diagnostic accuracy, lesion characterization (AO Spine classification), and soft tissue trauma. Fracture visibility, anterior (AVH) and posterior (PVH) vertebral wall height, vertebral body angle (VBA), and segmental kyphosis (SK) were recorded, with corresponding interobserver reliability (intraclass correlation coefficients, ICC) and intermodal agreement (Fleiss' kappa). The accuracy of Hounsfield unit (HU) estimation and mean cortical surface distances were measured. Thirty-seven patients (44 cervical spine fractures) were enrolled. sCT demonstrated a sensitivity of 97.3% for visualizing fractures. Intermodal agreement on injury classification was almost perfect (κ = 0.922; p < 0.001). Inter-reader ICCs were good to excellent (CT vs. sCT): AVH (0.88, 0.87); PVH (0.87, 0.88); VBA (0.78, 0.76); SK (0.77, 0.93). Intermodal comparison showed mean absolute differences of 0.3 mm (AVH), 0.3 mm (PVH), 1.15° (VBA), and 0.51° (SK). MRI visualized additional soft tissue trauma in 56.8% of patients. Voxelwise comparison of sCT with CT showed good to excellent agreement in HU (mean absolute error of 20, SD ± 62) and a mean absolute cortical surface distance of 0.45 mm (SD ± 0.13). sCT is a promising, radiation-free imaging technique for diagnosing cervical spine injuries with accuracy similar to CT. Question: How accurate is MRI-based synthetic CT (sCT) for fracture visualization and classification compared with the gold standard, CT, in cervical spine injuries? Findings: sCT demonstrated 97.3% sensitivity in detecting fractures and near-perfect intermodal agreement in classifying injuries according to the AO Spine system. Clinical relevance: sCT is a promising, radiation-free imaging modality offering accuracy comparable to CT for visualizing and classifying cervical spine injuries; combining conventional MRI sequences for soft tissue evaluation with sCT reconstruction for bone visualization provides comprehensive diagnostic information.
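
The voxelwise HU comparison is straightforward to reproduce on co-registered volumes. A minimal sketch follows, assuming equal-shape NumPy arrays and a shared body mask; it is an illustration of the metric, not the study's evaluation code.

```python
# Sketch: mean absolute HU error between registered sCT and reference CT.
import numpy as np

def hu_mae(sct: np.ndarray, ct: np.ndarray, mask: np.ndarray) -> tuple[float, float]:
    """Mean and SD of absolute HU error over masked voxels of registered volumes."""
    diff = np.abs(sct[mask] - ct[mask])
    return float(diff.mean()), float(diff.std())

# A mean near the reported 20 HU would indicate close voxelwise agreement.
```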

Enhancing efficient deep learning models with multimodal, multi-teacher insights for medical image segmentation.

Hossain KF, Kamran SA, Ong J, Tavakkoli A

PubMed · May 7, 2025
The rapid evolution of deep learning has dramatically enhanced medical image segmentation, producing models of unprecedented accuracy on complex medical images. Deep learning-based segmentation holds significant promise for advancing clinical care and the precision of medical interventions. However, the high computational demand and complexity of these models present significant barriers to their application in resource-constrained clinical settings. To address this challenge, we introduce Teach-Former, a novel knowledge distillation (KD) framework that leverages a Transformer backbone to condense the knowledge of multiple teacher models into a single, streamlined student model. It also excels at contextual and spatial interpretation of relationships across multimodal images, enabling more accurate and precise segmentation. Teach-Former stands out by harnessing multimodal inputs (CT, PET, MRI) and distilling both the final predictions and the intermediate attention maps, ensuring a richer transfer of spatial and contextual knowledge. Through this technique, the student model inherits the capacity for fine segmentation while operating with a significantly reduced parameter set and computational footprint. Additionally, a novel training strategy optimizes knowledge transfer, ensuring that the student model captures the intricate feature mappings essential for high-fidelity segmentation. The efficacy of Teach-Former was tested on two extensive multimodal datasets, HECKTOR21 and PI-CAI22, encompassing various image types. The results demonstrate that our KD strategy reduces model complexity and surpasses existing state-of-the-art methods. These findings indicate that the proposed methodology could facilitate efficient segmentation of complex multimodal medical images, supporting clinicians in more precise diagnosis and comprehensive monitoring of pathological conditions ( https://github.com/FarihaHossain/TeachFormer ).
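
A generic multi-teacher distillation objective of the kind described combines a supervised term, a soft term against the averaged teacher predictions, and an attention-matching term. The sketch below is a minimal PyTorch illustration with made-up weights and temperature; the authors' actual loss lives in their repository.

```python
# Sketch: multi-teacher KD loss distilling predictions and attention maps.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits_list, student_attn, teacher_attns,
            labels, T=2.0, alpha=0.5, beta=0.1):
    # Average the teachers' temperature-softened predictions
    teacher_probs = torch.stack(
        [F.softmax(t / T, dim=1) for t in teacher_logits_list]).mean(0)
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    teacher_probs, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)        # supervised term
    attn = torch.stack(                                   # attention-map matching
        [F.mse_loss(student_attn, t) for t in teacher_attns]).mean()
    return (1 - alpha) * hard + alpha * soft + beta * attn
```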

Deep learning approaches for classification tasks in medical X-ray, MRI, and ultrasound images: a scoping review.

Laçi H, Sevrani K, Iqbal S

PubMed · May 7, 2025
Medical images constitute the largest share of existing medical information, and dealing with them is challenging in terms of management as well as interpretation and analysis. Analyzing, understanding, and classifying them is therefore expensive and time-consuming, especially when performed manually. Deep learning is considered a good solution for image classification, segmentation, and transfer learning tasks, since it offers a large number of algorithms for such complex problems. PRISMA-ScR guidelines were followed in conducting this scoping review, which explores how deep learning is being used to classify a broad spectrum of diseases diagnosed from X-ray, MRI, or ultrasound images. The findings contribute to existing research by outlining the characteristics of the adopted datasets and the preprocessing or augmentation techniques applied to them. The authors summarize all relevant studies by the deep learning models used and the classification accuracy achieved and, wherever possible, include details of hardware and software configurations and the architectural components of the models. The models achieving the highest disease-classification accuracy are highlighted along with their strengths. The authors also discuss the limitations of current approaches and propose future directions for medical image classification.