Bai X, Feng M, Ma W, Liao Y

PubMed · Aug 25, 2025
Artificial intelligence (AI) chatbots have emerged as promising tools for enhancing medical communication, yet their efficacy in interpreting complex radiological reports remains underexplored. This study evaluates the performance of AI chatbots in translating magnetic resonance imaging (MRI) reports into patient-friendly language and providing clinical recommendations. A cross-sectional analysis was conducted on 6174 MRI reports from tumor patients across three hospitals. Two AI chatbots, GPT o1-preview (Chatbot 1) and Deepseek-R1 (Chatbot 2), were tasked with interpreting reports, classifying tumor characteristics, assessing surgical necessity, and suggesting treatments. Readability was measured using Flesch-Kincaid and Gunning Fog metrics, while accuracy was evaluated by medical reviewers. Statistical analyses included Friedman and Wilcoxon signed-rank tests. Both chatbots significantly improved readability, with Chatbot 2 achieving higher Flesch-Kincaid Reading Ease scores (median: 58.70 vs. 46.00, p < 0.001) and lower text complexity. Chatbot 2 outperformed Chatbot 1 in diagnostic accuracy (92.05% vs. 89.03% for tumor classification; 95.12% vs. 84.73% for surgical necessity, p < 0.001). Treatment recommendations from Chatbot 2 were more clinically relevant (98.10% acceptable vs. 75.41%), though both demonstrated high empathy (92.82-96.11%). Errors included misinterpretations of medical terminology and occasional hallucinations. AI chatbots, particularly Deepseek-R1, effectively enhance the readability and accuracy of MRI report interpretations for patients. However, physician oversight remains critical to mitigate errors. These tools hold potential to reduce healthcare burdens but require further refinement for clinical integration.
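For readers unfamiliar with the readability metrics cited above, the short sketch below computes Flesch Reading Ease and Gunning Fog scores for a piece of text. It uses a naive regex-based syllable counter, so the numbers are approximate, and the study's exact tooling is not stated.

```python
import re

def count_syllables(word: str) -> int:
    # Rough heuristic: count vowel groups; production tools use pronunciation dictionaries.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def readability(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    complex_words = [w for w in words if count_syllables(w) >= 3]

    words_per_sentence = len(words) / max(1, len(sentences))
    syllables_per_word = syllables / max(1, len(words))

    # Flesch Reading Ease: higher = easier (the study reports medians of 58.70 vs. 46.00).
    fre = 206.835 - 1.015 * words_per_sentence - 84.6 * syllables_per_word
    # Gunning Fog: approximate years of schooling needed to understand the text.
    fog = 0.4 * (words_per_sentence + 100 * len(complex_words) / max(1, len(words)))
    return {"flesch_reading_ease": fre, "gunning_fog": fog}

print(readability("The MRI shows a small, non-cancerous growth. It is not urgent."))
```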

Xiao Y, Lin W, Xie F, Liu L, Zheng G, Xiao C

PubMed · Aug 25, 2025
This study investigates the impact of cone beam computed tomography (CBCT) image quality on radiomic analysis and evaluates the potential of deep learning-based enhancement to improve radiomic feature accuracy in nasopharyngeal cancer (NPC). The CBAMRegGAN model was trained on 114 paired CT and CBCT datasets from 114 nasopharyngeal cancer patients to enhance CBCT images, with CT images as ground truth. The dataset was split into 82 patients for training, 12 for validation, and 20 for testing. Radiomic features from six categories (first-order, gray-level co-occurrence matrix (GLCM), gray-level run-length matrix (GLRLM), gray-level size-zone matrix (GLSZM), neighboring gray tone difference matrix (NGTDM), and gray-level dependence matrix (GLDM)) were extracted from the gross tumor volume (GTV) of original CBCT, enhanced CBCT, and CT. Comparing feature errors between original and enhanced CBCT showed that deep learning-based enhancement improves radiomic feature accuracy. The CBAMRegGAN model achieved improved image quality with a peak signal-to-noise ratio (PSNR) of 29.52 ± 2.28 dB, normalized mean absolute error (NMAE) of 0.0129 ± 0.004, and structural similarity index (SSIM) of 0.910 ± 0.025 for enhanced CBCT images. This led to reduced errors in most radiomic features, with average reductions across 20 patients of 19.0%, 24.0%, 3.0%, 19.0%, 15.0%, and 5.0% for first-order, GLCM, GLRLM, GLSZM, NGTDM, and GLDM features. This study demonstrates that CBCT image quality significantly influences radiomic analysis, and deep learning-based enhancement techniques can effectively improve both image quality and the accuracy of radiomic features in NPC.
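As an illustration of the image-quality metrics reported above (PSNR, NMAE, SSIM), the sketch below compares an enhanced CBCT slice against the planning CT using scikit-image. The NMAE normalization shown here (by the CT intensity range) is an assumption; the paper may define it differently.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def image_quality(enhanced_cbct: np.ndarray, ct: np.ndarray) -> dict:
    """Compare an enhanced CBCT slice against the planning CT (ground truth)."""
    data_range = float(ct.max() - ct.min())
    psnr = peak_signal_noise_ratio(ct, enhanced_cbct, data_range=data_range)
    ssim = structural_similarity(ct, enhanced_cbct, data_range=data_range)
    # Assumed NMAE definition: mean absolute error normalized by the CT intensity range.
    nmae = np.mean(np.abs(ct - enhanced_cbct)) / data_range
    return {"PSNR_dB": psnr, "NMAE": nmae, "SSIM": ssim}

# Synthetic 2D slices standing in for real CBCT/CT data.
ct = np.random.rand(256, 256).astype(np.float32)
enhanced = ct + 0.01 * np.random.randn(256, 256).astype(np.float32)
print(image_quality(enhanced, ct))
```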

Handayani VW, Margareth Amiatun Ruth MS, Rulaningtyas R, Caesarardhi MR, Yudhantorro BA, Yudianto A

PubMed · Aug 25, 2025
Accurately determining sex using features like facial bone profiles and teeth is crucial for identifying unknown victims. Lateral cephalometric radiographs effectively depict the lateral cranial structure, aiding the development of computational identification models. This study develops and evaluates a sex prediction model using cephalometric radiographs with several convolutional neural network (CNN) architectures. The primary goal is to evaluate the model's performance on standardized radiographic data and real-world cranial photographs to simulate forensic applications. Six CNN architectures (VGG16, VGG19, MobileNetV2, ResNet50V2, InceptionV3, and InceptionResNetV2) were trained and validated on 340 cephalometric images of Indonesian individuals aged 18 to 40 years. The data were divided into training (70%), validation (15%), and testing (15%) subsets. Data augmentation was implemented to mitigate class imbalance. Additionally, a set of 40 cranial images from anatomical specimens was employed to evaluate the model's generalizability. Model performance metrics included accuracy, precision, recall, and F1-score. CNN models were trained and evaluated on 340 cephalometric images (255 females and 85 males). VGG19 and ResNet50V2 achieved high F1-scores of 95% (females) and 83% (males), respectively, using cephalometric data, highlighting their strong class-specific performance. Although the overall accuracy exceeded 90%, the F1-score better reflected model performance in this imbalanced dataset. In contrast, performance notably decreased with cranial photographs, particularly when classifying female samples. Although InceptionResNetV2 achieved the highest F1-score for cranial photographs (62%), misclassification of females remained significant. Confusion matrices and per-class metrics further revealed persistent issues related to data imbalance and generalization across imaging modalities. Basic CNN models perform well on standardized cephalometric images but less effectively on photographic cranial images, indicating a domain shift between image types that limits generalizability. Improving real-world forensic performance will require further optimization and more diverse training data.
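To make the class-imbalance point concrete, the sketch below shows how per-class precision, recall, and F1 are computed with scikit-learn, and how balanced class weights could be derived for training. The predictions are randomly generated stand-ins; the study's actual pipeline is not specified.

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.utils.class_weight import compute_class_weight

# Hypothetical labels: 0 = female, 1 = male (255 vs. 85 in the cephalometric set).
y_true = np.array([0] * 255 + [1] * 85)
y_pred = np.random.randint(0, 2, size=y_true.shape)   # stand-in for CNN predictions

# Per-class precision/recall/F1 expose imbalance that overall accuracy hides.
print(classification_report(y_true, y_pred, target_names=["female", "male"]))
print(confusion_matrix(y_true, y_pred))

# Balanced class weights of this form can be passed to Keras model.fit(..., class_weight=...).
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y_true)
print(dict(enumerate(weights)))
```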

Issac BM, Kumar SN, Zafar S, Shakil KA, Wani MA

PubMed · Aug 25, 2025
With the exponential growth of big data in domains such as telemedicine and digital forensics, the secure transmission of sensitive medical information has become a critical concern. Conventional steganographic methods often fail to maintain diagnostic integrity or exhibit robustness against noise and transformations. In this study, we propose a novel deep learning-based steganographic framework that combines Squeeze-and-Excitation (SE) blocks, Inception modules, and residual connections to address these challenges. The encoder integrates dilated convolutions and SE attention to embed secret medical images within natural cover images, while the decoder employs residual and multi-scale Inception-based feature extraction for accurate reconstruction. Designed for deployment on NVIDIA Jetson TX2, the model ensures real-time, low-power operation suitable for edge healthcare applications. Experimental evaluation on MRI and OCT datasets demonstrates the model's efficacy, achieving Peak Signal-to-Noise Ratio (PSNR) values of 39.02 and 38.75 dB and a Structural Similarity Index (SSIM) of 0.9757, confirming minimal visual distortion. This research contributes to advancing secure, high-capacity steganographic systems for practical use in privacy-sensitive environments.
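The Squeeze-and-Excitation (SE) attention mentioned above follows a standard recipe (Hu et al., 2018): global average pooling followed by a small gating MLP that re-weights channels. The PyTorch sketch below shows a generic SE block, not the authors' exact encoder.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: re-weights channels using global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)            # squeeze: global average pooling
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                              # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        gates = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * gates                               # scale feature maps channel-wise

# Example: attention over a 64-channel feature map from a cover-image encoder.
features = torch.randn(2, 64, 128, 128)
print(SEBlock(64)(features).shape)   # torch.Size([2, 64, 128, 128])
```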

Dassanayake M, Lopez A, Reader A, Cook GJR, Mingels C, Rahmim A, Seifert R, Alberts I, Yousefirizi F

PubMed · Aug 25, 2025
This article reviews recent advancements in PET/computed tomography imaging, emphasizing the transformative impact of total-body and long-axial field-of-view scanners, which offer increased sensitivity, larger coverage, and faster, lower-dose imaging. It highlights the growing role of artificial intelligence (AI) in enhancing image reconstruction, resolution, and multi-tracer applications, enabling rapid processing and improved quantification. AI-driven techniques, such as super-resolution, positron range correction, and motion compensation, are improving lesion detectability and image quality. The review underscores the potential of these innovations to revolutionize clinical and research PET imaging, while also noting the challenges in validation and implementation for routine practice.

Dong L, Cai X, Ge H, Sun L, Pan X, Sun F, Meng Q

PubMed · Aug 25, 2025
To develop and evaluate a Dual-modality Complementary Feature Attention Network (DCFAN) that integrates spatial and stiffness information from B-mode ultrasound and shear wave elastography (SWE) for improved breast tumor classification and axillary lymph node (ALN) metastasis prediction. A total of 387 paired B-mode and SWE images from 218 patients were retrospectively analyzed. The proposed DCFAN incorporates attention mechanisms to effectively fuse structural features from B-mode ultrasound with stiffness features from SWE. Two classification tasks were performed: (1) differentiating benign from malignant tumors, and (2) classifying benign tumors, malignant tumors without ALN metastasis, and malignant tumors with ALN metastasis. Model performance was assessed using accuracy, sensitivity, specificity, and AUC, and compared with conventional CNN-based models and two radiologists with varying experience. In Task 1, DCFAN achieved an accuracy of 94.36% ± 1.45% and the highest AUC of 0.97. In Task 2, it attained 91.70% ± 3.77% accuracy and an average AUC of 0.83. The multimodal approach significantly outperformed the single-modality models in both tasks. Notably, in Task 1, DCFAN demonstrated higher specificity (94.9%) compared to the experienced radiologist (p = 0.002), and yielded higher F1-scores than both radiologists. It also outperformed several state-of-the-art deep learning models in diagnostic accuracy. DCFAN demonstrated robust and superior performance over existing CNN-based methods and radiologists in both breast tumor classification and ALN metastasis prediction. This approach may serve as a valuable assistive tool to enhance diagnostic accuracy in breast ultrasound.
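DCFAN's internal design is not detailed in the abstract, but the general idea of attention-weighted fusion of B-mode (structure) and SWE (stiffness) features can be sketched as below. This toy PyTorch module learns per-sample modality weights before a three-class head (benign / malignant without ALN metastasis / malignant with ALN metastasis) and is only an illustrative stand-in, not the published architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Toy cross-modal fusion: learn attention weights over B-mode and SWE features."""
    def __init__(self, feat_dim: int = 256, num_classes: int = 3):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, 2), nn.Softmax(dim=1))
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, bmode_feat: torch.Tensor, swe_feat: torch.Tensor) -> torch.Tensor:
        w = self.gate(torch.cat([bmode_feat, swe_feat], dim=1))   # per-sample modality weights
        fused = torch.cat([w[:, :1] * bmode_feat, w[:, 1:] * swe_feat], dim=1)
        return self.classifier(fused)

# Feature vectors would come from two CNN branches over the paired images.
logits = GatedFusion()(torch.randn(4, 256), torch.randn(4, 256))
print(logits.shape)   # torch.Size([4, 3])
```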

Guo L, Liu C, Soultanidis G

PubMed · Aug 25, 2025
Motion in clinical positron emission tomography (PET) examinations degrades image quality and quantification, requiring tailored correction strategies. Recent advancements integrate external devices and/or data-driven motion tracking with image registration and motion modeling, particularly deep learning-based methods, to address complex motion scenarios. The development of total-body PET systems with long axial field-of-view enables advanced motion correction by leveraging extended coverage and continuous acquisition. These innovations enhance the accuracy of motion estimation and correction across various clinical applications, improve quantitative reliability in static and dynamic imaging, and enable more precise assessments in oncology, neurology, and cardiovascular PET studies.

Wang X, Li P, Li Y, Zhang R, Duan F, Wang D

PubMed · Aug 25, 2025
To develop and validate predictive models based on ¹⁸F-fluorodeoxyglucose positron emission tomography/computed tomography (¹⁸F-FDG PET/CT) radiomics and a clinical model for differentiating invasive adenocarcinoma (IAC) from non-invasive ground-glass nodules (GGNs) in early-stage lung cancer. A total of 164 patients with GGNs histologically confirmed as part of the lung adenocarcinoma spectrum (including both invasive and non-invasive subtypes) underwent preoperative ¹⁸F-FDG PET/CT and surgery. Radiomic features were extracted from PET and CT images. Models were constructed using support vector machine (SVM), random forest (RF), and extreme gradient boosting (XGBoost). Five predictive models (CT, PET, PET/CT, Clinical, Combined) were evaluated using receiver operating characteristic (ROC) curves, decision curve analysis (DCA), and calibration curves. Statistical comparisons were performed using DeLong's test, net reclassification improvement (NRI), and integrated discrimination improvement (IDI). The Combined model, integrating PET/CT radiomic features with the clinical model, achieved the highest diagnostic performance (AUC: 0.950 in training, 0.911 in test). It consistently showed superior IDI and NRI across both cohorts and significantly outperformed the clinical model (DeLong p = 0.027), confirming its enhanced predictive power through multimodal integration. A clinical nomogram was constructed from the final model to support individualized risk stratification. Integrating PET/CT radiomic features with a clinical model significantly enhances the preoperative prediction of GGN invasiveness. This multimodal approach may assist in preoperative risk stratification and support personalized surgical decision-making in early-stage lung adenocarcinoma.
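As a minimal sketch of the modeling step described above, the code below fits SVM, random forest, and XGBoost classifiers on a synthetic radiomic feature matrix and reports test-set AUC. Feature selection, the clinical model, DeLong/NRI/IDI comparisons, and nomogram construction are omitted, and all data here are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Stand-in radiomic feature matrix (rows = nodules, columns = PET/CT features)
# and binary labels (1 = invasive adenocarcinoma, 0 = non-invasive).
X = np.random.rand(164, 50)
y = np.random.randint(0, 2, 164)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(probability=True)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
    "XGBoost": XGBClassifier(eval_metric="logloss"),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```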

Utsav Ratna Tuladhar, Richard Simon, Doran Mix, Michael Richards

arXiv preprint · Aug 25, 2025
Abdominal aortic aneurysms (AAA) pose a significant clinical risk due to their potential for rupture, which is often asymptomatic but can be fatal. Although maximum diameter is commonly used for risk assessment, diameter alone is insufficient as it does not capture the properties of the underlying material of the vessel wall, which play a critical role in determining the risk of rupture. To overcome this limitation, we propose a deep learning-based framework for elasticity imaging of AAAs with 2D ultrasound. Leveraging finite element simulations, we generate a diverse dataset of displacement fields with their corresponding modulus distributions. We train a U-Net model with a normalized mean squared error (NMSE) loss to infer the spatial modulus distribution from the axial and lateral components of the displacement fields. This model is evaluated across three experimental domains: digital phantom data from 3D COMSOL simulations, physical phantom experiments using biomechanically distinct vessel models, and clinical ultrasound exams from AAA patients. Our simulated results demonstrate that the proposed deep learning model is able to reconstruct modulus distributions, achieving an NMSE of 0.73%. Similarly, in the physical phantom experiments, the predicted modulus ratio closely matches the expected values, confirming that the model generalizes beyond simulation. We compare our approach with an iterative method, which shows comparable performance but higher computation time. In contrast, the deep learning method can provide quick and effective estimates of tissue stiffness from ultrasound images, which could help assess the risk of AAA rupture without invasive procedures.
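A minimal version of the normalized mean squared error (NMSE) objective described above is shown below in PyTorch; the exact normalization used by the authors is an assumption here, as is the shape convention for the displacement-to-modulus mapping.

```python
import torch

def nmse_loss(pred_modulus: torch.Tensor, true_modulus: torch.Tensor) -> torch.Tensor:
    """Normalized MSE: squared error scaled by the energy of the target map,
    making the loss insensitive to the absolute modulus scale."""
    return torch.sum((pred_modulus - true_modulus) ** 2) / torch.sum(true_modulus ** 2)

# Hypothetical shapes: a batch of 8 displacement fields (axial + lateral channels)
# mapped by a U-Net to a single-channel modulus image of the same spatial size.
pred = torch.rand(8, 1, 128, 128)
target = torch.rand(8, 1, 128, 128)
print(nmse_loss(pred, target))
```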

Le Zhang, Fuping Wu, Arun Thirunavukarasu, Kevin Bronik, Thomas Nichols, Bartlomiej W. Papiez

arXiv preprint · Aug 25, 2025
Large annotated datasets are vital for training segmentation models, but pixel-level labeling is time-consuming, error-prone, and often requires scarce expert annotators, especially in medical imaging. In contrast, coarse annotations are quicker, cheaper, and easier to produce, even by non-experts. In this paper, we propose to use coarse drawings from both positive (target) and negative (background) classes in the image, even with noisy pixels, to train a convolutional neural network (CNN) for semantic segmentation. We present a method for learning the true segmentation label distributions from purely noisy coarse annotations using two coupled CNNs. The two CNNs are kept distinct by enforcing high fidelity to the characteristics of the noisy training annotations. We also introduce complementary label learning, which encourages estimation of the negative label distribution. To illustrate the properties of our method, we first use a toy segmentation dataset based on MNIST. We then present quantitative results on publicly available datasets: the Cityscapes dataset for multi-class segmentation, and retinal images for medical applications. In all experiments, our method outperforms state-of-the-art methods, particularly when the proportion of coarse annotations is small relative to the available dense annotations.
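The complementary label learning mentioned above encourages the network not to assign foreground probability to pixels coarsely scribbled as background. The PyTorch snippet below is an illustrative binary-segmentation version of such a loss term, not the authors' exact formulation.

```python
import torch

def complementary_label_loss(logits: torch.Tensor, neg_mask: torch.Tensor) -> torch.Tensor:
    """On pixels coarsely scribbled as background (neg_mask == 1), penalize
    probability mass assigned to the target (foreground) class: -log(1 - p_fg)."""
    p_fg = torch.sigmoid(logits)                       # binary segmentation probabilities
    loss = -torch.log(1.0 - p_fg.clamp(max=1 - 1e-6))  # discourage predicting foreground
    return (loss * neg_mask).sum() / neg_mask.sum().clamp(min=1.0)

# Hypothetical batch: network logits and a coarse negative-scribble mask.
logits = torch.randn(2, 1, 64, 64)
neg_mask = (torch.rand(2, 1, 64, 64) > 0.8).float()
print(complementary_label_loss(logits, neg_mask))
```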