Thyroid Volume Measurement With AI-Assisted Freehand 3D Ultrasound Compared to 2D Ultrasound-A Clinical Trial.

Rask KB, Makouei F, Wessman MHJ, Kristensen TT, Todsen T

PubMed | Aug 8, 2025
Accurate thyroid volume assessment is critical in thyroid disease diagnostics, yet conventional high-resolution 2D ultrasound has limitations. Freehand 3D ultrasound with AI-assisted segmentation presents a potential advancement, but its clinical accuracy requires validation. This prospective clinical trial included 14 patients scheduled for total thyroidectomy. Preoperative thyroid volume was measured using both 2D ultrasound (ellipsoid method) and freehand 3D ultrasound with AI segmentation. Postoperative thyroid volume, determined via the water displacement method, served as the reference standard. The median postoperative thyroid volume was 14.8 mL (IQR 8.8-20.2). The median volume difference was 1.7 mL (IQR 1.2-3.3) for 3D ultrasound and 3.6 mL (IQR 2.3-6.6) for 2D ultrasound (p = 0.02). The inter-operator reliability coefficient for 3D ultrasound was 0.986 (p < 0.001). These findings suggest that freehand 3D ultrasound with AI-assisted segmentation provides superior accuracy and reproducibility compared to 2D ultrasound and may enhance clinical thyroid volume assessment. ClinicalTrials.gov identifier: NCT05510609.
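
For reference, the 2D "ellipsoid method" named above reduces to a one-line formula. The sketch below is illustrative only (the function name and the example measurements are not from the trial); it applies the standard per-lobe computation V = length x width x depth x pi/6, which yields millilitres when the dimensions are in centimetres. Total gland volume is then the sum of the two lobe estimates.

```python
import math

def ellipsoid_lobe_volume_ml(length_cm: float, width_cm: float, depth_cm: float) -> float:
    """Standard ellipsoid estimate of thyroid lobe volume:
    V = L * W * D * pi/6 (cm inputs -> mL output)."""
    return length_cm * width_cm * depth_cm * math.pi / 6.0

# Hypothetical lobe measuring 4.5 x 1.8 x 1.6 cm:
print(round(ellipsoid_lobe_volume_ml(4.5, 1.8, 1.6), 1))  # ~6.8 mL
```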

Ensemble deep learning model for early diagnosis and classification of Alzheimer's disease using MRI scans.

Robinson Jeyapaul S, Kombaiya S, Jeya Kumar AK, Stanley VJ

PubMed | Aug 8, 2025
Background: Alzheimer's disease (AD) is an irreversible neurodegenerative disorder characterized by progressive cognitive and memory decline. Accurate prediction of high-risk individuals enables early detection and better patient care. Objective: This study aims to enhance MRI-based AD classification through advanced image preprocessing, optimal feature selection, and ensemble deep learning techniques. Methods: The study employs advanced image preprocessing techniques such as normalization, affine transformation, and denoising to improve MRI quality. Brain structure segmentation is performed using the adaptive DeepLabV3+ approach for precise AD diagnosis. A novel optimal feature selection framework, H-IBMFO, integrates the Improved Beluga Whale Optimizer with Manta Foraging Optimization. An ensemble deep learning model combining MobileNetV2, DarkNet, and ResNet is used for classification. MATLAB is used for implementation. Results: The proposed system achieves 98.7% accuracy, with 98% precision, 98% sensitivity, 99% specificity, and a 98% F-measure, demonstrating superior classification performance with minimal false positives and negatives. Conclusions: The study establishes an efficient framework for AD classification, significantly improving early detection through optimized feature selection and deep learning. The high accuracy and reliability of the system validate its effectiveness in diagnosing AD stages.
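
The abstract does not specify how the three backbones' outputs are combined; soft voting (averaging per-model class probabilities) is one common ensembling rule, sketched below in Python with illustrative names.

```python
import numpy as np

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_predict(logits_per_model, weights=None):
    """Soft-voting fusion: average per-model softmax probabilities.
    logits_per_model: list of (n_samples, n_classes) arrays, one per
    backbone (e.g., MobileNetV2, DarkNet, ResNet)."""
    probs = np.stack([softmax(l) for l in logits_per_model])  # (m, n, c)
    w = np.ones(len(logits_per_model)) if weights is None else np.asarray(weights, float)
    fused = np.tensordot(w / w.sum(), probs, axes=1)          # (n, c)
    return fused.argmax(axis=1), fused
```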

Deep learning-based image enhancement for improved black blood imaging in brain metastasis.

Oh G, Paik S, Jo SW, Choi HJ, Yoo RE, Choi SH

PubMed | Aug 8, 2025
To evaluate the utility of deep learning (DL)-based image enhancement for improving the image quality and diagnostic performance of 3D contrast-enhanced T1-weighted black blood (BB) MR imaging for brain metastases. This retrospective study included 126 patients with and 121 patients without brain metastasis who underwent 3-T MRI examinations. Commercially available DL-based MR image enhancement software was used for image post-processing. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of enhancing lesions were measured. For qualitative assessment and diagnostic performance evaluation, two radiologists graded the overall image quality, noise, and artifacts of each image and the conspicuity of visible lesions. The Wilcoxon signed-rank test and regression analyses with generalized estimating equations (GEEs) were used for statistical analysis. For MR images not previously processed with other DL-based methods, SNR and CNR were higher in the DL-enhanced images than in the standard images (438.3 vs. 661.1, p < 0.01; 173.9 vs. 223.5, p < 0.01). Overall image quality and noise were improved in the DL images (proportion of images with the top score of 5: 38% vs. 65%, p < 0.01, for quality; 43% vs. 74%, p < 0.01, for noise), whereas artifacts did not differ significantly (p ≥ 0.07). Sensitivity increased after post-processing from 79% to 86% (p = 0.02), especially for lesions smaller than 5 mm (69% to 78%, p = 0.03), while changes in specificity (p = 0.24) and average false-positive (FP) count (p = 0.18) were not significant. DL image enhancement improves the image quality and diagnostic performance of 3D contrast-enhanced T1-weighted BB MR imaging for the detection of small brain metastases. Question: Can deep learning (DL)-based image enhancement improve the image quality and diagnostic performance of 3D contrast-enhanced T1-weighted black blood (BB) MR imaging for brain metastases? Findings: DL-based image enhancement improved the image quality of thin-slice BB MR images and the sensitivity for brain metastasis, particularly for lesions smaller than 5 mm. Clinical relevance: DL-based image enhancement of BB images may assist in the accurate diagnosis of brain metastasis by achieving better sensitivity while maintaining comparable specificity.
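
As a rough illustration of the quantitative endpoints, the snippet below computes SNR and CNR from region-of-interest statistics under one common definition; the study's exact ROI placement and formulas may differ.

```python
import numpy as np

def snr_cnr(lesion_roi, background_roi, noise_roi):
    """One common definition (the paper's may differ):
    SNR = mean(lesion) / sd(noise)
    CNR = (mean(lesion) - mean(background)) / sd(noise)."""
    noise_sd = float(np.std(noise_roi))
    snr = float(np.mean(lesion_roi)) / noise_sd
    cnr = (float(np.mean(lesion_roi)) - float(np.mean(background_roi))) / noise_sd
    return snr, cnr
```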

Medical application driven content based medical image retrieval system for enhanced analysis of X-ray images.

Saranya E, Chinnadurai M

PubMed | Aug 8, 2025
By carefully analyzing latent image properties, content-based image retrieval (CBIR) systems can retrieve relevant images without relying on text descriptions, natural-language tags, or keywords associated with an image. This search procedure makes automatic image retrieval straightforward in large, well-balanced datasets; in the medical field, however, such datasets are usually unavailable. This study proposed an advanced DL technique to enhance the accuracy of image retrieval in complex medical datasets. The proposed model comprises five stages: pre-processing, image decomposition, feature extraction, dimensionality reduction, and classification with an image retrieval mechanism. The hybridized Wavelet-Hadamard Transform (HWHT) was used to obtain both low- and high-frequency detail for analysis. To extract the main characteristics, the Gray Level Co-occurrence Matrix (GLCM) was employed. To minimize feature complexity, Sine-chaos-based Artificial Rabbit Optimization (SCARO) was applied. By employing the Bhattacharyya coefficient for improved similarity matching, the Bhattacharyya-context performance-aware global attention-based Transformer (BCGAT) improves classification accuracy. Experimental results showed that on the COVID-19 chest X-ray dataset the model attained accuracy, precision, recall, and F1-score of 99.5%, 97.1%, 97.1%, and 97.1%, respectively, while on the chest X-ray (pneumonia) dataset it attained 98.60%, 98.49%, 97.40%, and 98.50%, respectively. For the NIH chest X-ray dataset, the accuracy was 99.67%.
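
The Bhattacharyya coefficient used for similarity matching is compact enough to sketch; the generic version below operates on normalized feature histograms and is not the authors' implementation.

```python
import numpy as np

def bhattacharyya_coefficient(p, q, eps=1e-12):
    """BC(p, q) = sum_i sqrt(p_i * q_i) over normalized histograms;
    1.0 means identical distributions, 0.0 means disjoint support."""
    p = np.asarray(p, float)
    q = np.asarray(q, float)
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    return float(np.sqrt(p * q).sum())

# Retrieval use: rank database images by descending BC against the query's histogram.
```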

Non-invasive prediction of the secondary enucleation risk in uveal melanoma based on pretreatment CT and MRI prior to stereotactic radiotherapy.

Yedekci Y, Arimura H, Jin Y, Yilmaz MT, Kodama T, Ozyigit G, Yazici G

PubMed | Aug 8, 2025
The aim of this study was to develop a radiomic model to non-invasively predict the risk of secondary enucleation (SE) in patients with uveal melanoma (UM) prior to stereotactic radiotherapy using pretreatment computed tomography (CT) and magnetic resonance (MR) images. This retrospective study encompassed a cohort of 308 patients diagnosed with UM who underwent stereotactic radiosurgery (SRS) or fractionated stereotactic radiotherapy (FSRT) using the CyberKnife system (Accuray, Sunnyvale, CA, USA) between 2007 and 2018. Each patient received comprehensive ophthalmologic evaluations, including assessment of visual acuity, anterior segment examination, fundus examination, and ultrasonography. All patients were followed up for a minimum of 5 years. The cohort comprised 65 patients who underwent SE (SE+) and 243 who did not (SE-). Radiomic features were extracted from pretreatment CT and MR images, and four machine learning algorithms were evaluated on these features to develop a robust predictive model. The stacking model using CT + MR radiomic features achieved the highest predictive performance, with an area under the curve (AUC) of 0.90, accuracy of 0.86, sensitivity of 0.81, and specificity of 0.90. Robust mean absolute deviation, derived from the Laplacian-of-Gaussian-filtered MR images, was identified as the most significant predictor, showing a statistically significant difference between SE+ and SE- cases (p = 0.005). Radiomic analysis of pretreatment CT and MR images can non-invasively predict the risk of SE in UM patients undergoing SRS/FSRT. The combined CT + MR radiomic model may inform more personalized therapeutic decisions, thereby reducing unnecessary radiation exposure and potentially improving patient outcomes.
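
A stacking model over radiomic features can be assembled with scikit-learn as below; since the abstract names only four machine learning algorithms without detailing them, the base learners and hyperparameters here are illustrative assumptions, not the authors' configuration.

```python
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Illustrative base learners; the paper's choices are not specified here.
base_learners = [
    ("svm", make_pipeline(StandardScaler(), SVC(probability=True))),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
    ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5, stack_method="predict_proba",
)
# stack.fit(X_train, y_train)               # X = concatenated CT + MR radiomic features
# risk = stack.predict_proba(X_test)[:, 1]  # predicted probability of SE
```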

A Deep Learning Model to Detect Acute MCA Occlusion on High Resolution Non-Contrast Head CT.

Fussell DA, Lopez JL, Chang PD

PubMed | Aug 8, 2025
To assess the feasibility and accuracy of a deep learning (DL) model to identify acute middle cerebral artery (MCA) occlusion using high-resolution non-contrast CT (NCCT) imaging data. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at 1.0 mm slice thickness or less, MCA thrombus was labeled using same-day CTA as ground truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate the likelihood of acute MCA occlusion. For detection of MCA M1 segment acute occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Inclusion of M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high-resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource constrained. DL = deep learning.
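
The exam-level decision rule described (summing positive voxels of the thrombus segmentation to score occlusion likelihood) is straightforward to sketch; the probability threshold below is a placeholder, not the paper's operating point.

```python
import numpy as np

def mca_occlusion_score(thrombus_prob_map: np.ndarray, voxel_threshold: float = 0.5) -> int:
    """Collapse a per-voxel thrombus probability map into one exam-level
    score: the count of voxels called positive."""
    return int((thrombus_prob_map >= voxel_threshold).sum())

# An exam would then be flagged when its score exceeds an operating point
# tuned on the validation split (trading off sensitivity vs. specificity).
```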

Enhancing B-mode-based breast cancer diagnosis via cross-attention fusion of H-scan and Nakagami imaging with multi-CAM-QUS-driven XAI.

Mondol SS, Hasan MK

PubMed | Aug 8, 2025
B-mode ultrasound is widely employed for breast lesion diagnosis due to its affordability, wide availability, and effectiveness, particularly in dense breast tissue where mammography may be less sensitive. However, it disregards critical tissue information embedded in raw radiofrequency (RF) data. While both modalities have shown promise in computer-aided diagnosis (CAD), their combined potential remains largely unexplored. Approach: This paper presents an automated breast lesion classification network that utilizes H-scan and Nakagami parametric images derived from RF ultrasound signals, combined with machine-generated B-mode images, seamlessly integrated through a Multi-Modal Cross-Attention Fusion (MM-CAF) mechanism to extract complementary information. The proposed architecture also incorporates an attention-guided modified InceptionV3 for feature extraction, a Knowledge-Guided Cross-Modality Learning (KGCML) module for inter-modal knowledge sharing, and Attention-Driven Context Enhancement (ADCE) modules to improve contextual understanding and fusion with the classification network. The network employs categorical cross-entropy loss, a Multi-CAM-based loss to guide learning toward accurate lesion-specific features, and a Multi-QUS-based loss to embed clinically meaningful domain knowledge and effectively distinguish between benign and malignant lesions, all while supporting explainable-AI (XAI) principles. Main results: Experiments conducted on multi-center breast ultrasound datasets (BUET-BUSD, ATL, and OASBUD), characterized by demographic diversity, demonstrate the effectiveness of the proposed approach, achieving classification accuracies of 92.54%, 89.93%, and 90.0%, respectively, along with high interpretability and trustworthiness. These results surpass those of existing methods based on B-mode and/or RF data, highlighting the superior performance and robustness of the proposed technique. By integrating complementary RF-derived information with B-mode imaging, together with pseudo-segmentation and domain-informed loss functions, the method significantly boosts lesion classification accuracy, enabling fully automated, explainable CAD and paving the way for wider clinical adoption of AI-driven breast screening.
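
A minimal cross-attention fusion block in the spirit of MM-CAF (whose exact design the abstract does not detail) might look like the PyTorch sketch below; the token counts and embedding size are assumptions for illustration.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens: torch.Tensor, context_tokens: torch.Tensor) -> torch.Tensor:
        # query_tokens:   (B, Nq, dim), e.g., B-mode feature tokens
        # context_tokens: (B, Nc, dim), e.g., H-scan + Nakagami feature tokens
        fused, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + fused)  # residual connection + layer norm

# Hypothetical shapes: batch of 2, 49 B-mode tokens attending over 98 RF-derived tokens.
b_mode = torch.randn(2, 49, 256)
rf_maps = torch.randn(2, 98, 256)
out = CrossAttentionFusion()(b_mode, rf_maps)  # -> (2, 49, 256)
```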

LLM-Based Extraction of Imaging Features from Radiology Reports: Automating Disease Activity Scoring in Crohn's Disease.

Dehdab R, Mankertz F, Brendel JM, Maalouf N, Kaya K, Afat S, Kolahdoozan S, Radmard AR

PubMed | Aug 8, 2025
Large Language Models (LLMs) offer a promising solution for extracting structured clinical information from free-text radiology reports. The Simplified Magnetic Resonance Index of Activity (sMARIA) is a validated scoring system used to quantify Crohn's disease (CD) activity based on Magnetic Resonance Enterography (MRE) findings. This study aims to evaluate the performance of two advanced LLMs in extracting key imaging features and computing sMARIA scores from free-text MRE reports. This retrospective study included 117 anonymized free-text MRE reports from patients with confirmed CD. ChatGPT (GPT-4o) and DeepSeek (DeepSeek-R1) were prompted using a structured input designed to extract four key radiologic features relevant to sMARIA: bowel wall thickness, mural edema, perienteric fat stranding, and ulceration. LLM outputs were evaluated against radiologist annotations at both the segment and feature levels. Segment-level agreement was assessed using accuracy, mean absolute error (MAE) and Pearson correlation. Feature-level performance was evaluated using sensitivity, specificity, precision, and F1-score. Errors including confabulations were recorded descriptively. ChatGPT achieved a segment-level accuracy of 98.6%, MAE of 0.17, and Pearson correlation of 0.99. DeepSeek achieved 97.3% accuracy, MAE of 0.51, and correlation of 0.96. At the feature level, ChatGPT yielded an F1-score of 98.8% (precision 97.8%, sensitivity 99.9%), while DeepSeek achieved 97.9% (precision 96.0%, sensitivity 99.8%). LLMs demonstrate near-human accuracy in extracting structured information and computing sMARIA scores from free-text MRE reports. This enables automated assessment of CD activity without altering current reporting workflows, supporting longitudinal monitoring and large-scale research. Integration into clinical decision support systems may be feasible in the future, provided appropriate human oversight and validation are ensured.
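 
Once the four features are extracted per bowel segment, the sMARIA itself is a short weighted sum. The sketch below uses the commonly cited weighting (1.5 for wall thickness > 3 mm, 1 each for mural edema and perienteric fat stranding, 2 for ulceration); it should be verified against the original sMARIA publication before any clinical use.

```python
def segment_smaria(wall_thickness_mm: float, edema: bool,
                   fat_stranding: bool, ulcers: bool) -> float:
    """Per-segment sMARIA; weights as commonly published (verify at source)."""
    return (1.5 * (wall_thickness_mm > 3.0)
            + 1.0 * edema
            + 1.0 * fat_stranding
            + 2.0 * ulcers)

# Example: thickness 5 mm, edema present, no fat stranding, ulceration present
# -> 1.5 + 1.0 + 0.0 + 2.0 = 4.5 for that segment.
```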

A Co-Plane Machine Learning Model Based on Ultrasound Radiomics for the Evaluation of Diabetic Peripheral Neuropathy.

Jiang Y, Peng R, Liu X, Xu M, Shen H, Yu Z, Jiang Z

PubMed | Aug 8, 2025
Detection of diabetic peripheral neuropathy (DPN) is critical for preventing severe complications. Machine learning (ML) and radiomics offer promising approaches for the diagnosis of DPN; however, their application in ultrasound-based detection of DPN remains limited. Moreover, there is no consensus on whether longitudinal or transverse ultrasound planes provide more robust radiomic features for nerve evaluation. This study aimed to analyze and compare radiomic features from different ultrasound planes of the tibial nerve and to develop a co-plane fusion ML model to enhance the diagnostic accuracy of DPN. A total of 516 feet from 262 patients with diabetes across two institutions were analyzed and stratified into a training cohort (n = 309), an internal testing cohort (n = 133), and an external testing cohort (n = 74). A total of 1316 radiomic features were extracted from both transverse and longitudinal planes of the tibial nerve. After feature selection, six ML algorithms were used to construct radiomics models based on transverse, longitudinal, and combined planes. The performance of these models was assessed using receiver operating characteristic curves, calibration curves, and decision curve analysis (DCA). Shapley Additive exPlanations (SHAP) was employed to elucidate the key features and their contributions to predictions within the optimal model. The co-plane Support Vector Machine (SVM) model exhibited superior performance, achieving AUC values of 0.90 (95% CI: 0.86-0.93), 0.88 (95% CI: 0.84-0.91), and 0.70 (95% CI: 0.64-0.76) in the training, internal testing, and external testing cohorts, respectively. These results significantly exceeded those of the single-plane models, as determined by the DeLong test (P < 0.05). Calibration and DCA curves indicated good model fit and suggested potential clinical utility. The co-plane SVM model, which integrates transverse and longitudinal radiomic features of the tibial nerve, demonstrated optimal performance in DPN prediction, thereby significantly enhancing the efficacy of DPN diagnosis. This model may serve as a robust tool for noninvasive assessment of DPN, highlighting its promising applicability in clinical settings.
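
The co-plane fusion idea (concatenating transverse- and longitudinal-plane radiomic features before fitting an SVM) can be sketched as below; the feature selection and tuning steps from the paper are omitted, and all names are illustrative.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def fit_co_plane_svm(X_transverse: np.ndarray, X_longitudinal: np.ndarray, y: np.ndarray):
    """Fuse per-nerve feature vectors from both planes and fit an RBF SVM."""
    X = np.hstack([X_transverse, X_longitudinal])  # co-plane fused features
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    model.fit(X, y)
    return model
```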

Artificial intelligence in radiology, nuclear medicine and radiotherapy: Perceptions, experiences and expectations from the medical radiation technologists in Central and South America.

Mendez-Avila C, Torre S, Arce YV, Contreras PR, Rios J, Raza NO, Gonzalez H, Hernandez YC, Cabezas A, Lucero M, Ezquerra V, Malamateniou C, Solis-Barquero SM

PubMed | Aug 8, 2025
Artificial intelligence (AI) has been growing in the field of medical imaging and clinical practice. It is essential to understand the perceptions, experiences, and expectations regarding AI implementation among medical radiation technologists (MRTs) working in radiology, nuclear medicine, and radiotherapy. Several global studies have reported on AI implementation, but there is almost no information from Central and South American professionals. This study aimed to understand perceptions of the impact of AI on MRTs, as well as the varying experiences and expectations these professionals have regarding its implementation. An online survey was conducted among Central and South American MRTs to collect qualitative data on perceptions of AI implementation in radiology, nuclear medicine, and radiotherapy. The analysis used descriptive statistics for closed-ended questions and dimensional coding for open-ended responses. A total of 398 valid responses were obtained, and 98.5% (n = 392) of respondents agreed with the implementation of AI in clinical practice. The primary contributions of AI identified were the optimization of processes, greater diagnostic accuracy, and the possibility of job expansion. On the other hand, concerns were raised regarding the delay in providing training opportunities, limited avenues for learning in this domain, the displacement of roles, and dehumanization in clinical practice. This sample likely overrepresents professionals with more AI knowledge than their peers, so these results should be interpreted with caution. Our findings indicate strong professional confidence in AI's capacity to improve imaging quality while maintaining patient safety standards; however, user resistance may disrupt implementation efforts. Our results highlight the dual need for (a) comprehensive professional training programs and (b) user education initiatives that demonstrate AI's clinical value in radiology. We therefore recommend a carefully structured, phased AI implementation approach, guided by evidence-based guidelines and validated training protocols from existing research. AI is already present in medical imaging, but its effective implementation depends on building acceptance and trust through education and training, enabling MRTs to use it safely for patient benefit.