
Non-invasive prediction of the secondary enucleation risk in uveal melanoma based on pretreatment CT and MRI prior to stereotactic radiotherapy.

Yedekci Y, Arimura H, Jin Y, Yilmaz MT, Kodama T, Ozyigit G, Yazici G

PubMed · Aug 8, 2025
The aim of this study was to develop a radiomic model to non-invasively predict the risk of secondary enucleation (SE) in patients with uveal melanoma (UM) prior to stereotactic radiotherapy, using pretreatment computed tomography (CT) and magnetic resonance (MR) images. This retrospective study encompassed a cohort of 308 patients diagnosed with UM who underwent stereotactic radiosurgery (SRS) or fractionated stereotactic radiotherapy (FSRT) using the CyberKnife system (Accuray, Sunnyvale, CA, USA) between 2007 and 2018. Each patient received comprehensive ophthalmologic evaluations, including assessment of visual acuity, anterior segment examination, fundus examination, and ultrasonography. All patients were followed up for a minimum of 5 years. The cohort comprised 65 patients who underwent SE (SE+) and 243 who did not (SE-). Radiomic features were extracted from pretreatment CT and MR images. To develop a robust predictive model, four different machine learning algorithms were evaluated using these features. The stacking model utilizing CT + MR radiomic features achieved the highest predictive performance, with an area under the curve (AUC) of 0.90, accuracy of 0.86, sensitivity of 0.81, and specificity of 0.90. The robust mean absolute deviation feature derived from the Laplacian-of-Gaussian-filtered MR images was identified as the most significant predictor, demonstrating a statistically significant difference between SE+ and SE- cases (p = 0.005). Radiomic analysis of pretreatment CT and MR images can non-invasively predict the risk of SE in UM patients undergoing SRS/FSRT. The combined CT + MR radiomic model may inform more personalized therapeutic decisions, thereby reducing unnecessary radiation exposure and potentially improving patient outcomes.
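No code accompanies the abstract; the following is a minimal sketch of a stacking classifier over concatenated CT and MR radiomic feature tables, using scikit-learn. The data shapes, base learners, and labels are placeholder assumptions, not details from the paper.

```python
# Hypothetical sketch: stacking ensemble over combined CT + MR radiomic features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder feature matrices (rows = patients) and labels: 1 = SE+, 0 = SE-.
rng = np.random.default_rng(0)
X_ct = rng.normal(size=(308, 100))
X_mr = rng.normal(size=(308, 100))
y = rng.integers(0, 2, size=308)

X = np.hstack([X_ct, X_mr])  # one combined CT + MR feature vector per patient

stack = StackingClassifier(
    estimators=[
        ("rf", make_pipeline(StandardScaler(), RandomForestClassifier(random_state=0))),
        ("svm", make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))),
    ],
    final_estimator=LogisticRegression(),  # meta-learner over base-model predictions
)
print(cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())
```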

Advanced dynamic ensemble framework with explainability driven insights for precision brain tumor classification across datasets.

Singh R, Gupta S, Ibrahim AO, Gabralla LA, Bharany S, Rehman AU, Hussen S

PubMed · Aug 8, 2025
Accurate detection of brain tumors remains a significant challenge due to the diversity of tumor types and the role of human intervention in the diagnostic process. This study proposes a novel ensemble deep learning system for accurate brain tumor classification using MRI data. The proposed system integrates a fine-tuned Convolutional Neural Network (CNN), ResNet-50, and EfficientNet-B5 to create a dynamic ensemble framework that addresses existing challenges. An adaptive dynamic weight distribution strategy is employed during training to optimize the contribution of each network in the framework. To address class imbalance and improve model generalization, a customized weighted cross-entropy loss function is incorporated. The model achieves improved interpretability through explainable artificial intelligence (XAI) techniques, including Grad-CAM, SHAP, SmoothGrad, and LIME, providing deeper insights into prediction rationale. The proposed system achieves a classification accuracy of 99.4% on the test set, 99.48% on the validation set, and 99.31% in cross-dataset validation. Furthermore, entropy-based uncertainty analysis quantifies prediction confidence, yielding an average entropy of 0.3093 and effectively identifying uncertain predictions to mitigate diagnostic errors. Overall, the proposed framework demonstrates high accuracy, robustness, and interpretability, highlighting its potential for integration into automated brain tumor diagnosis systems.
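As a rough illustration of the entropy-based uncertainty screen the abstract mentions, here is a small NumPy sketch; the class count, probabilities, and cutoff are illustrative assumptions.

```python
# Sketch: Shannon entropy of softmax outputs as a per-case uncertainty score.
import numpy as np

def predictive_entropy(probs: np.ndarray) -> np.ndarray:
    """H(p) = -sum_i p_i * ln(p_i), computed per row (per prediction)."""
    p = np.clip(probs, 1e-12, 1.0)  # avoid log(0)
    return -(p * np.log(p)).sum(axis=1)

# Hypothetical ensemble outputs: one row per scan, one column per tumor class.
probs = np.array([
    [0.97, 0.01, 0.01, 0.01],  # confident prediction -> low entropy
    [0.40, 0.35, 0.15, 0.10],  # ambiguous prediction -> high entropy
])
H = predictive_entropy(probs)
flag_for_review = H > 0.7  # illustrative cutoff, not the paper's threshold
print(H, flag_for_review)
```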

BCDCNN: breast cancer deep convolutional neural network for breast cancer detection using MRI images.

Martina Jaincy DE, Pattabiraman V

PubMed · Aug 8, 2025
Breast cancer (BC) arises from cells in breast tissue and occurs primarily in women. Early identification of BC is critical to the treatment process. To reduce unnecessary biopsies, Magnetic Resonance Imaging (MRI) is now widely used for diagnosing BC. MRI is the most recommended examination to detect and monitor BC and delineate lesion areas, owing to its superior soft tissue imaging capability. However, it is a time-consuming procedure and requires skilled radiologists. Here, a Breast Cancer Deep Convolutional Neural Network (BCDCNN) is presented for Breast Cancer Detection (BCD) using MRI images. First, the input image is taken from the database and pre-processed with an Adaptive Kalman Filter (AKF). Thereafter, cancer area segmentation is conducted on the filtered images by a Pyramid Scene Parsing Network (PSPNet). To improve segmentation accuracy and adapt to complex tumor boundaries, PSPNet is optimized using the Jellyfish Search Optimizer (JSO), a recent nature-inspired metaheuristic that converges to an optimal solution in fewer iterations than conventional methods. Then, image augmentation is performed using rotation, random erasing, and flipping. Afterwards, features are extracted and, finally, BCD is conducted using BCDCNN, wherein the loss function is newly designed based on an adaptive error similarity. It improves overall performance by dynamically emphasizing samples with ambiguous predictions, enabling the model to focus on diagnostically challenging cases and enhancing its discriminative capability. BCDCNN achieved accuracy of 90.2%, sensitivity of 90.6%, and specificity of 90.9%. The proposed method not only demonstrates strong classification performance but also holds promising potential for real-world clinical application in early and accurate breast cancer diagnosis.
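The augmentation stage (rotation, flipping, random erasing) is standard; a torchvision sketch follows, with all parameter values chosen for illustration rather than taken from the paper.

```python
# Sketch: augmentation pipeline with rotation, flipping, and random erasing.
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),   # rotation (illustrative range)
    transforms.RandomHorizontalFlip(p=0.5),  # flipping
    transforms.ToTensor(),
    transforms.RandomErasing(p=0.25),        # random erasing operates on tensors
])

img = Image.new("L", (224, 224))  # dummy grayscale MRI slice
x = augment(img)
print(x.shape)  # torch.Size([1, 224, 224])
```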

A Deep Learning Model to Detect Acute MCA Occlusion on High Resolution Non-Contrast Head CT.

Fussell DA, Lopez JL, Chang PD

PubMed · Aug 8, 2025
To assess the feasibility and accuracy of a deep learning (DL) model to identify acute middle cerebral artery (MCA) occlusion using high resolution non-contrast CT (NCCT) imaging data. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at 1.0 mm slice thickness or less, MCA thrombus was labeled using same day CTA as ground-truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate likelihood of acute MCA occlusion. For detection of MCA M1 segment acute occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Inclusion of M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource constrained. DL = deep learning.
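The decision rule described, summing positive voxels from the segmentation output to score occlusion likelihood, can be sketched in a few lines of NumPy; the probability threshold and operating point below are illustrative assumptions.

```python
# Sketch: positive-voxel count from a 3D thrombus segmentation as an occlusion score.
import numpy as np

def occlusion_score(seg_prob: np.ndarray, voxel_thresh: float = 0.5) -> int:
    """seg_prob: per-voxel thrombus probabilities of shape (depth, height, width)."""
    return int((seg_prob > voxel_thresh).sum())

seg_prob = np.random.rand(64, 256, 256) * 0.4  # dummy volume, mostly sub-threshold
score = occlusion_score(seg_prob)
is_occlusion = score > 50  # operating point would be tuned on validation data
print(score, is_occlusion)
```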

Enhancing B-mode-based breast cancer diagnosis via cross-attention fusion of H-scan and Nakagami imaging with multi-CAM-QUS-driven XAI.

Mondol SS, Hasan MK

PubMed · Aug 8, 2025
Objective. B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, widespread availability, and effectiveness, particularly in cases of dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in raw radiofrequency (RF) data. While both modalities have demonstrated promise in Computer-Aided Diagnosis (CAD), their combined potential remains largely unexplored.
Approach. This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, seamlessly integrated through a Multi-Modal Cross-Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss to guide learning toward accurate lesion-specific features, and a Multi-QUS-based loss to embed clinically meaningful domain knowledge and effectively distinguish between benign and malignant lesions, all while supporting explainable AI (XAI) principles.
Main results. Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, together with pseudo-segmentation and domain-informed loss functions, our method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for widespread clinical adoption of AI-driven breast screening.
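The paper's MM-CAF block is not published as code; below is a generic cross-attention fusion sketch in PyTorch in the same spirit, where the token shapes, dimensions, and residual design are assumptions rather than the authors' architecture.

```python
# Sketch: cross-attention fusion of B-mode and RF-derived feature tokens.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, bmode_tokens, rf_tokens):
        # B-mode features attend to RF-derived (H-scan / Nakagami) features.
        fused, _ = self.attn(query=bmode_tokens, key=rf_tokens, value=rf_tokens)
        return self.norm(bmode_tokens + fused)  # residual connection

bmode = torch.randn(2, 49, 256)  # e.g. a 7x7 feature map flattened to 49 tokens
rf = torch.randn(2, 49, 256)
print(CrossAttentionFusion()(bmode, rf).shape)  # torch.Size([2, 49, 256])
```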

LLM-Based Extraction of Imaging Features from Radiology Reports: Automating Disease Activity Scoring in Crohn's Disease.

Dehdab R, Mankertz F, Brendel JM, Maalouf N, Kaya K, Afat S, Kolahdoozan S, Radmard AR

PubMed · Aug 8, 2025
Large Language Models (LLMs) offer a promising solution for extracting structured clinical information from free-text radiology reports. The Simplified Magnetic Resonance Index of Activity (sMARIA) is a validated scoring system used to quantify Crohn's disease (CD) activity based on Magnetic Resonance Enterography (MRE) findings. This study aimed to evaluate the performance of two advanced LLMs in extracting key imaging features and computing sMARIA scores from free-text MRE reports. This retrospective study included 117 anonymized free-text MRE reports from patients with confirmed CD. ChatGPT (GPT-4o) and DeepSeek (DeepSeek-R1) were prompted using a structured input designed to extract four key radiologic features relevant to sMARIA: bowel wall thickness, mural edema, perienteric fat stranding, and ulceration. LLM outputs were evaluated against radiologist annotations at both the segment and feature levels. Segment-level agreement was assessed using accuracy, mean absolute error (MAE), and Pearson correlation. Feature-level performance was evaluated using sensitivity, specificity, precision, and F1-score. Errors, including confabulations, were recorded descriptively. ChatGPT achieved a segment-level accuracy of 98.6%, MAE of 0.17, and Pearson correlation of 0.99. DeepSeek achieved 97.3% accuracy, MAE of 0.51, and correlation of 0.96. At the feature level, ChatGPT yielded an F1-score of 98.8% (precision 97.8%, sensitivity 99.9%), while DeepSeek achieved 97.9% (precision 96.0%, sensitivity 99.8%). LLMs demonstrate near-human accuracy in extracting structured information and computing sMARIA scores from free-text MRE reports. This enables automated assessment of CD activity without altering current reporting workflows, supporting longitudinal monitoring and large-scale research. Integration into clinical decision support systems may be feasible in the future, provided appropriate human oversight and validation are ensured.
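For reference, a per-segment sMARIA score can be computed directly from the four extracted features; the sketch below uses the published sMARIA formula (1.5 × wall thickness in mm + 8 × edema + 5 × fat stranding + 10 × ulceration, each flag scored 0/1), while the data structure and example values are assumptions.

```python
# Sketch: per-segment sMARIA from LLM-extracted features (published formula).
from dataclasses import dataclass

@dataclass
class SegmentFeatures:
    wall_thickness_mm: float
    edema: bool
    fat_stranding: bool
    ulceration: bool

def smaria(seg: SegmentFeatures) -> float:
    # sMARIA = 1.5 * wall thickness (mm) + 8 * edema + 5 * fat stranding + 10 * ulceration
    return (1.5 * seg.wall_thickness_mm
            + 8 * seg.edema
            + 5 * seg.fat_stranding
            + 10 * seg.ulceration)

# Hypothetical features for one bowel segment, as an LLM might extract them.
seg = SegmentFeatures(wall_thickness_mm=6.0, edema=True, fat_stranding=True, ulceration=False)
print(smaria(seg))  # 22.0
```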

Value of artificial intelligence in neuro-oncology.

Voigtlaender S, Nelson TA, Karschnia P, Vaios EJ, Kim MM, Lohmann P, Galldiks N, Filbin MG, Azizi S, Natarajan V, Monje M, Dietrich J, Winter SF

PubMed · Aug 8, 2025
CNS cancers are complex, difficult-to-treat malignancies that remain insufficiently understood and mostly incurable, despite decades of research effort. Artificial intelligence (AI) is poised to reshape neuro-oncological practice and research, driving advances in medical image analysis, neuro-molecular-genetic characterisation, biomarker discovery, therapeutic target identification, tailored management strategies, and neurorehabilitation. This Review examines key opportunities and challenges associated with AI applications along the neuro-oncological care trajectory. We highlight emerging trends in foundation models, biophysical modelling, synthetic data, and drug development, and discuss regulatory, operational, and ethical hurdles across data, translation, and implementation gaps. Near-term clinical translation depends on scaling validated AI solutions for well-defined clinical tasks. In contrast, more experimental AI solutions offer broader potential but require technical refinement and resolution of data and regulatory challenges. Addressing both general and neuro-oncology-specific issues is essential to unlock the full potential of AI and ensure its responsible, effective, and needs-based integration into neuro-oncological practice.

A Co-Plane Machine Learning Model Based on Ultrasound Radiomics for the Evaluation of Diabetic Peripheral Neuropathy.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

PubMed · Aug 8, 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. A total of 516 feet from 262 patients with diabetes across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were used to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA). Shapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated good model fit and potential clinical utility. The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, thereby significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
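As a sketch of the co-plane fusion idea, transverse and longitudinal radiomic feature vectors can be concatenated before fitting an SVM; the scikit-learn example below uses placeholder data and omits the paper's feature selection step.

```python
# Sketch: co-plane fusion by concatenating per-plane radiomic features, then SVM.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_trans = rng.normal(size=(442, 658))  # transverse-plane features per foot (placeholder)
X_long = rng.normal(size=(442, 658))   # longitudinal-plane features per foot (placeholder)
y = rng.integers(0, 2, size=442)       # 1 = DPN, 0 = no DPN (placeholder labels)

X = np.hstack([X_trans, X_long])       # co-plane feature vector
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = make_pipeline(StandardScaler(), SVC(probability=True, random_state=0))
model.fit(X_tr, y_tr)
print(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```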

Clinical insights to improve medical deep learning design: A comprehensive review of methods and benefits.

Thornblad TAE, Ewals LJS, Nederend J, Luyer MDP, De With PHN, van der Sommen F

PubMed · Aug 8, 2025
The success of deep learning and computer vision on natural images has led to increased interest in medical image deep learning applications. However, introducing black-box deep learning models leaves little room for domain-specific knowledge when making the final diagnosis. For medical computer vision applications, not only accuracy but also robustness, interpretability, and explainability are essential to ensure clinicians' trust. Medical deep learning applications can therefore benefit from insights into the application at hand by involving clinical staff and considering the clinical diagnostic process. In this review, different clinically-inspired methods are surveyed, including clinical insights used at different stages of deep learning design for three-dimensional (3D) computed tomography (CT) image data. The review investigates 400 research articles covering deep learning-based approaches to the diagnosis of different diseases, in terms of how clinical insights are included in the published work. On this basis, a further detailed review is conducted of the 47 scientific articles that use clinical inspiration. The clinically-inspired methods concern preparation for training, 3D medical image data processing, integration of clinical data, and model architecture selection and development. This highlights different ways in which domain-specific knowledge can be used in the design of deep learning systems.

Artificial intelligence in radiology, nuclear medicine and radiotherapy: Perceptions, experiences and expectations from the medical radiation technologists in Central and South America.

Mendez-Avila C, Torre S, Arce YV, Contreras PR, Rios J, Raza NO, Gonzalez H, Hernandez YC, Cabezas A, Lucero M, Ezquerra V, Malamateniou C, Solis-Barquero SM

PubMed · Aug 8, 2025
Artificial intelligence (AI) has been growing in the field of medical imaging and clinical practice. It is essential to understand the perceptions, experiences, and expectations regarding AI implementation among medical radiation technologists (MRTs) working in radiology, nuclear medicine, and radiotherapy. Several global studies have reported on AI implementation, but there is almost no information from Central and South American professionals. This study aimed to understand MRTs' perceptions of the impact of AI, as well as their varying experiences and expectations regarding its implementation. An online survey was conducted among Central and South American MRTs to collect qualitative data on perceptions of AI implementation in radiology, nuclear medicine, and radiotherapy. The analysis used descriptive statistics for closed-ended questions and dimensional coding for open-ended responses. A total of 398 valid responses were obtained; 98.5% (n = 392) of the respondents agreed with the implementation of AI in clinical practice. The primary contributions of AI identified were process optimization, greater diagnostic accuracy, and the possibility of job expansion. Concerns were raised regarding delayed training opportunities and limited avenues for learning in this domain, displacement of roles, and dehumanization of clinical practice. This sample likely over-represents professionals with greater AI knowledge than their peers, so these results should be interpreted with caution. Our findings indicate strong professional confidence in AI's capacity to improve imaging quality while maintaining patient safety standards. However, user resistance may disrupt implementation efforts. Our results highlight the dual need for (a) comprehensive professional training programs and (b) user education initiatives that demonstrate AI's clinical value in radiology. We therefore recommend a carefully structured, phased AI implementation approach, guided by evidence-based guidelines and validated training protocols from existing research. AI is already present in medical imaging, but its effective implementation depends on building acceptance and trust through education and training, enabling MRTs to use it safely for patient benefit.