Page 285 of 6596585 results

Oh G, Paik S, Jo SW, Choi HJ, Yoo RE, Choi SH

pubmed logopapers · Aug 8 2025
To evaluate the utility of deep learning (DL)-based image enhancement for improving the image quality and diagnostic performance of 3D contrast-enhanced T1-weighted black blood (BB) MR imaging for brain metastases. This retrospective study included 126 patients with and 121 patients without brain metastasis who underwent 3-T MRI examinations. Commercially available DL-based MR image enhancement software was utilized for image post-processing. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of enhancing lesions were measured. For qualitative assessment and diagnostic performance evaluation, two radiologists graded the overall image quality, noise, and artifacts of each image and the conspicuity of visible lesions. The Wilcoxon signed-rank test and regression analyses with generalized estimating equations (GEEs) were used for statistical analysis. For MR images not previously processed with other DL-based methods, SNR and CNR were higher in the DL-enhanced images than in the standard images (438.3 vs. 661.1 for SNR and 173.9 vs. 223.5 for CNR, standard vs. DL-enhanced; both p < 0.01). Overall image quality and noise were improved in the DL images (proportion of score-5 ratings, 38% vs. 65% for overall quality and 43% vs. 74% for noise; both p < 0.01), whereas artifacts did not differ significantly (p ≥ 0.07). Sensitivity increased after post-processing from 79 to 86% (p = 0.02), especially for lesions smaller than 5 mm (69 to 78%, p = 0.03), with no significant change in specificity (p = 0.24) or average false-positive (FP) count (p = 0.18). DL image enhancement improves the image quality and diagnostic performance of 3D contrast-enhanced T1-weighted BB MR imaging for the detection of small brain metastases. Question Can deep learning (DL)-based image enhancement improve the image quality and diagnostic performance of 3D contrast-enhanced T1-weighted black blood (BB) MR imaging for brain metastases?
Findings DL-based image enhancement improved image quality of thin slice BB MR images and sensitivity for brain metastasis, particularly for lesions smaller than 5 mm. Clinical relevance DL-based image enhancement on BB images may assist in the accurate diagnosis of brain metastasis by achieving better sensitivity while maintaining comparable specificity.
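As a rough illustration of the quantitative endpoints above, lesion SNR and CNR are typically computed from region-of-interest (ROI) statistics. A minimal sketch, assuming conventional ROI-based definitions and synthetic intensities; the study's exact measurement protocol is not stated in the abstract:

```python
import numpy as np

def snr(lesion_roi, noise_roi):
    """Signal-to-noise ratio: mean lesion signal over background noise SD."""
    return float(np.mean(lesion_roi) / np.std(noise_roi))

def cnr(lesion_roi, parenchyma_roi, noise_roi):
    """Contrast-to-noise ratio between enhancing lesion and normal tissue."""
    return float(abs(np.mean(lesion_roi) - np.mean(parenchyma_roi)) / np.std(noise_roi))

rng = np.random.default_rng(0)
lesion = rng.normal(300.0, 10.0, size=(20, 20))      # enhancing lesion ROI
parenchyma = rng.normal(150.0, 10.0, size=(20, 20))  # normal parenchyma ROI
air = rng.normal(0.0, 2.0, size=(20, 20))            # background noise ROI
print(f"SNR={snr(lesion, air):.1f}, CNR={cnr(lesion, parenchyma, air):.1f}")
```

Under these definitions, image enhancement that lowers background noise raises both metrics at once, which matches the paired improvement reported above.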

Saranya E, Chinnadurai M

pubmed logopapers · Aug 8 2025
By carefully analyzing latent image properties, content-based image retrieval (CBIR) systems can retrieve pertinent images without relying on text descriptions, natural-language tags, or keywords associated with the image. This search procedure makes it straightforward to automatically retrieve images from huge, well-balanced datasets; in the medical field, however, such datasets are usually not available. This study proposed an advanced DL technique to enhance the accuracy of image retrieval in complex medical datasets. The proposed model comprises five stages: pre-processing, image decomposition, feature extraction, dimensionality reduction, and classification with an image retrieval mechanism. The hybridized Wavelet-Hadamard Transform (HWHT) was utilized to obtain both low- and high-frequency detail for analysis. To extract the main characteristics, the Gray Level Co-occurrence Matrix (GLCM) was employed. To minimize feature complexity, Sine chaos-based artificial rabbit optimization (SCARO) was utilized. By employing the Bhattacharyya Coefficient for improved similarity matching, the Bhattacharyya Context performance-aware global attention-based Transformer (BCGAT) improves classification accuracy. The experimental results showed that, on the COVID-19 Chest X-ray dataset, the model attained accuracy, precision, recall, and F1-score of 99.5%, 97.1%, 97.1%, and 97.1%, respectively. On the chest X-ray (pneumonia) dataset, it attained accuracy, precision, recall, and F1-score values of 98.60%, 98.49%, 97.40%, and 98.50%, respectively. For the NIH chest X-ray dataset, the accuracy was 99.67%.
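The GLCM step in this pipeline can be sketched directly. A minimal illustration of co-occurrence-based texture features; the pixel offset, quantisation level, and the two features computed here are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Symmetric, normalised gray-level co-occurrence matrix for one offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)  # quantise
    m = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    m = m + m.T                  # symmetrise
    return m / m.sum()           # joint probabilities

def glcm_features(p):
    """Contrast and energy, two classic GLCM texture features."""
    i, j = np.indices(p.shape)
    contrast = float(np.sum(p * (i - j) ** 2))
    energy = float(np.sum(p ** 2))
    return contrast, energy

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(32, 32))
p = glcm(image)
contrast, energy = glcm_features(p)
```

In practice several offsets and angles are pooled into one feature vector before dimensionality reduction.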

Yedekci Y, Arimura H, Jin Y, Yilmaz MT, Kodama T, Ozyigit G, Yazici G

pubmed logopapers · Aug 8 2025
The aim of this study was to develop a radiomic model to non-invasively predict the risk of secondary enucleation (SE) in patients with uveal melanoma (UM) prior to stereotactic radiotherapy, using pretreatment computed tomography (CT) and magnetic resonance (MR) images. This retrospective study encompassed a cohort of 308 patients diagnosed with UM who underwent stereotactic radiosurgery (SRS) or fractionated stereotactic radiotherapy (FSRT) using the CyberKnife system (Accuray, Sunnyvale, CA, USA) between 2007 and 2018. Each patient received comprehensive ophthalmologic evaluations, including assessment of visual acuity, anterior segment examination, fundus examination, and ultrasonography. All patients were followed up for a minimum of 5 years. The cohort comprised 65 patients who underwent SE (SE+) and 243 who did not (SE-). Radiomic features were extracted from pretreatment CT and MR images, and four machine learning algorithms were evaluated on these features to develop a robust predictive model. The stacking model utilizing CT + MR radiomic features achieved the highest predictive performance, with an area under the curve (AUC) of 0.90, accuracy of 0.86, sensitivity of 0.81, and specificity of 0.90. Robust mean absolute deviation derived from the Laplacian-of-Gaussian-filtered MR images was identified as the most significant predictor, showing a statistically significant difference between SE+ and SE- cases (p = 0.005). Radiomic analysis of pretreatment CT and MR images can thus non-invasively predict the risk of SE in UM patients undergoing SRS/FSRT. The combined CT + MR radiomic model may inform more personalized therapeutic decisions, reducing unnecessary radiation exposure and potentially improving patient outcomes.
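The top predictor named above, robust mean absolute deviation, has a simple closed form. A minimal sketch following the PyRadiomics convention (mean absolute deviation from the mean, over voxels between the 10th and 90th intensity percentiles); the percentile bounds and the preceding Laplacian-of-Gaussian filtering are assumptions based on that convention:

```python
import numpy as np

def robust_mad(voxels, lo=10, hi=90):
    """Robust mean absolute deviation of an intensity sample:
    restrict to the [lo, hi] percentile range, then take the mean
    absolute deviation from that subset's mean."""
    p_lo, p_hi = np.percentile(voxels, [lo, hi])
    subset = voxels[(voxels >= p_lo) & (voxels <= p_hi)]
    return float(np.mean(np.abs(subset - subset.mean())))
```

The percentile clipping makes the feature insensitive to a few extreme voxels, which is useful on filtered MR images.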

Singh R, Gupta S, Ibrahim AO, Gabralla LA, Bharany S, Rehman AU, Hussen S

pubmed logopapers · Aug 8 2025
Accurate detection of brain tumors remains a significant challenge due to the diversity of tumor types and the role of human intervention in the diagnostic process. This study proposes a novel ensemble deep learning system for accurate brain tumor classification using MRI data. The proposed system integrates a fine-tuned Convolutional Neural Network (CNN), ResNet-50, and EfficientNet-B5 into a dynamic ensemble framework that addresses these challenges. An adaptive dynamic weight distribution strategy is employed during training to optimize the contribution of each network in the framework. To address class imbalance and improve model generalization, a customized weighted cross-entropy loss function is incorporated. The model gains interpretability through explainable artificial intelligence (XAI) techniques, including Grad-CAM, SHAP, SmoothGrad, and LIME, providing deeper insight into its prediction rationale. The proposed system achieves a classification accuracy of 99.4% on the test set, 99.48% on the validation set, and 99.31% in cross-dataset validation. Furthermore, entropy-based uncertainty analysis quantifies prediction confidence, yielding an average entropy of 0.3093 and effectively identifying uncertain predictions to mitigate diagnostic errors. Overall, the proposed framework demonstrates high accuracy, robustness, and interpretability, highlighting its potential for integration into automated brain tumor diagnosis systems.
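Two of the components described, the class-weighted cross-entropy loss and the entropy-based uncertainty score, can be sketched in a few lines. The weights and logits below are illustrative assumptions, not the paper's values:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def weighted_cross_entropy(logits, labels, class_weights):
    """Cross-entropy with per-class weights, so rare classes are up-weighted."""
    p = softmax(logits)
    n = len(labels)
    return float(-np.mean(class_weights[labels] * np.log(p[np.arange(n), labels] + 1e-12)))

def prediction_entropy(logits):
    """Shannon entropy of each softmax prediction; high values flag uncertain cases."""
    p = softmax(logits)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

logits = np.array([[4.0, 0.0, 0.0, 0.0],   # confident prediction: low entropy
                   [1.0, 1.0, 1.0, 1.0]])  # uniform prediction: entropy = ln(4)
entropies = prediction_entropy(logits)
```

Thresholding the entropy gives a simple reject option: predictions above the threshold are referred for human review rather than reported automatically.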

Martina Jaincy DE, Pattabiraman V

pubmed logopapers · Aug 8 2025
Breast cancer (BC) arises from the cells of breast tissue and occurs primarily in women. Early identification of BC is significant to the treatment process. To lessen unwanted biopsies, Magnetic Resonance Imaging (MRI) is now utilized for diagnosing BC. MRI is the most recommended examination for detecting and monitoring BC and for delineating lesion areas, given its superior soft-tissue imaging; however, it is a time-consuming procedure and requires skilled radiologists. Here, a Breast Cancer Deep Convolutional Neural Network (BCDCNN) is presented for Breast Cancer Detection (BCD) using MRI images. First, the input image is taken from the database and subjected to pre-processing, performed with an adaptive Kalman filter (AKF). Thereafter, cancer-area segmentation is conducted on the filtered images by a Pyramid Scene Parsing Network (PSPNet). To improve segmentation accuracy and adapt to complex tumor boundaries, PSPNet is optimized using the Jellyfish Search Optimizer (JSO), a recent nature-inspired metaheuristic that converges to an optimal solution in fewer iterations than conventional methods. Then, image augmentation is performed, including rotation, random erasing, and flipping. Afterwards, features are extracted and, finally, BCD is conducted employing BCDCNN, whose loss function is newly designed around an adaptive error similarity. This improves overall performance by dynamically emphasizing samples with ambiguous predictions, enabling the model to focus on diagnostically challenging cases and enhancing its discriminative capability. BCDCNN achieved 90.2% accuracy, 90.6% sensitivity, and 90.9% specificity.
The proposed method not only demonstrates strong classification performance but also holds promising potential for real-world clinical application in early and accurate breast cancer diagnosis.
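The augmentation step described (rotation, random erasing, and flipping) can be sketched on a single 2D slice. Probabilities, angles, and erase sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)

def random_flip(img, p=0.5):
    """Horizontal flip with probability p."""
    return img[:, ::-1].copy() if rng.random() < p else img

def random_rotate90(img):
    """Rotate by a random multiple of 90 degrees (shape-preserving for squares)."""
    return np.rot90(img, k=int(rng.integers(0, 4))).copy()

def random_erase(img, max_frac=0.2):
    """Zero out a random rectangle (random-erasing regularisation)."""
    h, w = img.shape
    eh = int(rng.integers(1, max(2, int(h * max_frac) + 1)))
    ew = int(rng.integers(1, max(2, int(w * max_frac) + 1)))
    y = int(rng.integers(0, h - eh + 1))
    x = int(rng.integers(0, w - ew + 1))
    out = img.copy()
    out[y:y + eh, x:x + ew] = 0.0
    return out

slice_2d = np.ones((64, 64))
augmented = random_erase(random_rotate90(random_flip(slice_2d)))
```

Random erasing in particular forces the classifier not to rely on any single image region, which helps when lesions vary in position.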

Fussell DA, Lopez JL, Chang PD

pubmed logopapers · Aug 8 2025
To assess the feasibility and accuracy of a deep learning (DL) model to identify acute middle cerebral artery (MCA) occlusion using high-resolution non-contrast CT (NCCT) imaging data. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at 1.0 mm slice thickness or less, MCA thrombus was labeled using same-day CTA as ground truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate the likelihood of acute MCA occlusion. For detection of MCA M1 segment acute occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Inclusion of M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high-resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource constrained. DL = deep learning.
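The decision rule described, thresholding per-voxel thrombus probabilities and using the positive-voxel count as the exam-level likelihood, can be sketched directly. Both thresholds below are illustrative assumptions, not the study's operating point:

```python
import numpy as np

def thrombus_voxel_count(voxel_probs, prob_thresh=0.5):
    """Number of voxels the segmentation model calls thrombus."""
    return int((voxel_probs >= prob_thresh).sum())

def predict_occlusion(voxel_probs, min_voxels=50, prob_thresh=0.5):
    """Exam-level call: positive if enough thrombus voxels are found."""
    return thrombus_voxel_count(voxel_probs, prob_thresh) >= min_voxels

# Simulated probability volume with a small hyperdense-thrombus region
volume = np.zeros((64, 64, 32))
volume[30:34, 30:34, 10:16] = 0.9   # 4 * 4 * 6 = 96 suspicious voxels
```

Sweeping `min_voxels` traces out the sensitivity/specificity trade-off that the reported AUROC summarizes.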

Mondol SS, Hasan MK

pubmed logopapers · Aug 8 2025
B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, wide availability, and effectiveness, particularly in dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in the raw radiofrequency (RF) data. While both modalities have shown promise in Computer-Aided Diagnosis (CAD), their combined potential remains largely unexplored. Approach. This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, integrated through a Multi-Modal Cross Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss to guide learning toward accurate lesion-specific features, and a Multi-QUS-based loss to embed clinically meaningful domain knowledge and effectively distinguish between benign and malignant lesions, all while supporting explainable AI (XAI) principles. Main results. Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, together with pseudo-segmentation and domain-informed loss functions, the method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for wider clinical adoption of AI-driven breast screening.
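One ingredient of the RF-derived parametric images above, the Nakagami shape parameter, can be estimated from envelope moments. A minimal sketch assuming the standard inverse-normalised-variance estimator; the sliding-window map construction the paper would use is omitted:

```python
import numpy as np

def nakagami_params(envelope):
    """Moment-based Nakagami fit of an RF envelope sample:
    shape m = (E[R^2])^2 / Var(R^2), scale omega = E[R^2]."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    omega = r2.mean()
    m = omega ** 2 / r2.var()
    return float(m), float(omega)

# A Rayleigh envelope (fully developed speckle) corresponds to m = 1
rng = np.random.default_rng(3)
m_hat, omega_hat = nakagami_params(rng.rayleigh(scale=1.0, size=100_000))
```

Departures of `m` from 1 reflect scatterer organisation in the tissue, which is why Nakagami maps carry diagnostic information that B-mode alone discards.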

Dehdab R, Mankertz F, Brendel JM, Maalouf N, Kaya K, Afat S, Kolahdoozan S, Radmard AR

pubmed logopapers · Aug 8 2025
Large Language Models (LLMs) offer a promising solution for extracting structured clinical information from free-text radiology reports. The Simplified Magnetic Resonance Index of Activity (sMARIA) is a validated scoring system used to quantify Crohn's disease (CD) activity based on Magnetic Resonance Enterography (MRE) findings. This study aims to evaluate the performance of two advanced LLMs in extracting key imaging features and computing sMARIA scores from free-text MRE reports. This retrospective study included 117 anonymized free-text MRE reports from patients with confirmed CD. ChatGPT (GPT-4o) and DeepSeek (DeepSeek-R1) were prompted using a structured input designed to extract four key radiologic features relevant to sMARIA: bowel wall thickness, mural edema, perienteric fat stranding, and ulceration. LLM outputs were evaluated against radiologist annotations at both the segment and feature levels. Segment-level agreement was assessed using accuracy, mean absolute error (MAE) and Pearson correlation. Feature-level performance was evaluated using sensitivity, specificity, precision, and F1-score. Errors including confabulations were recorded descriptively. ChatGPT achieved a segment-level accuracy of 98.6%, MAE of 0.17, and Pearson correlation of 0.99. DeepSeek achieved 97.3% accuracy, MAE of 0.51, and correlation of 0.96. At the feature level, ChatGPT yielded an F1-score of 98.8% (precision 97.8%, sensitivity 99.9%), while DeepSeek achieved 97.9% (precision 96.0%, sensitivity 99.8%). LLMs demonstrate near-human accuracy in extracting structured information and computing sMARIA scores from free-text MRE reports. This enables automated assessment of CD activity without altering current reporting workflows, supporting longitudinal monitoring and large-scale research. Integration into clinical decision support systems may be feasible in the future, provided appropriate human oversight and validation are ensured.
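The scoring step that follows the LLM extraction can be sketched from the published sMARIA formula (per-segment score = 1.5 × thickened wall + 1 × mural edema + 1 × fat stranding + 2 × ulcers, each finding binary); the 3 mm wall-thickness cut-off below follows that convention and is an assumption not restated in this abstract:

```python
def smaria_segment(wall_thickness_mm: float, edema: bool,
                   fat_stranding: bool, ulcers: bool) -> float:
    """Per-segment sMARIA from the four extracted radiologic findings."""
    return (1.5 * (wall_thickness_mm > 3.0)
            + 1.0 * edema
            + 1.0 * fat_stranding
            + 2.0 * ulcers)
```

Because the LLM only has to emit four findings per segment, the arithmetic itself stays deterministic and auditable, which supports the human-oversight requirement noted above.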

Voigtlaender S, Nelson TA, Karschnia P, Vaios EJ, Kim MM, Lohmann P, Galldiks N, Filbin MG, Azizi S, Natarajan V, Monje M, Dietrich J, Winter SF

pubmed logopapers · Aug 8 2025
CNS cancers are complex, difficult-to-treat malignancies that remain insufficiently understood and mostly incurable, despite decades of research efforts. Artificial intelligence (AI) is poised to reshape neuro-oncological practice and research, driving advances in medical image analysis, neuro-molecular-genetic characterisation, biomarker discovery, therapeutic target identification, tailored management strategies, and neurorehabilitation. This Review examines key opportunities and challenges associated with AI applications along the neuro-oncological care trajectory. We highlight emerging trends in foundation models, biophysical modelling, synthetic data, and drug development and discuss regulatory, operational, and ethical hurdles across data, translation, and implementation gaps. Near-term clinical translation depends on scaling validated AI solutions for well defined clinical tasks. In contrast, more experimental AI solutions offer broader potential but require technical refinement and resolution of data and regulatory challenges. Addressing both general and neuro-oncology-specific issues is essential to unlock the full potential of AI and ensure its responsible, effective, and needs-based integration into neuro-oncological practice.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

pubmed logopapers · Aug 8 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. A total of 516 feet from 262 diabetics across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were utilized to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA), and Shapley Additive exPlanations (SHAP) were employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated good model fit and suggested potential clinical utility.
The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, thereby significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
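A minimal sketch of the co-plane idea, assuming simple per-nerve concatenation of the two planes' feature vectors and a rank-based AUROC for evaluation; the study's actual SVM, feature selection, and feature counts are not reproduced here:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via ranks (equivalent to the Mann-Whitney U statistic);
    assumes continuous, tie-free scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

rng = np.random.default_rng(4)
transverse = rng.normal(size=(100, 20))     # transverse-plane radiomic features
longitudinal = rng.normal(size=(100, 20))   # longitudinal-plane radiomic features
fused = np.concatenate([transverse, longitudinal], axis=1)  # co-plane vector

labels = np.array([0, 0, 1, 1])
perfect = auroc(np.array([0.1, 0.2, 0.8, 0.9]), labels)
```

Concatenation is the simplest fusion choice; it lets the downstream classifier weight whichever plane's features prove more informative, consistent with the co-plane model outperforming either single plane.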
