Page 22 of 2982975 results

Structured Report Generation for Breast Cancer Imaging Based on Large Language Modeling: A Comparative Analysis of GPT-4 and DeepSeek.

Chen K, Hou X, Li X, Xu W, Yi H

PubMed · Aug 7 2025
The purpose of this study is to compare the performance of the GPT-4 and DeepSeek large language models in generating structured breast cancer multimodality imaging integrated reports from free-text radiology reports, including mammography, ultrasound, MRI, and PET/CT. A retrospective analysis was conducted on 1358 free-text reports from 501 breast cancer patients across two institutions. The study design involved synthesizing multimodal imaging data into structured reports with three components: primary lesion characteristics, metastatic lesions, and TNM staging. Input prompts were standardized for both models, with GPT-4 using predesigned instructions and DeepSeek requiring manual input. Reports were evaluated based on physician satisfaction using a Likert scale; descriptive accuracy, including lesion localization, size, SUV, and metastasis assessment; and TNM staging correctness according to NCCN guidelines. Statistical analysis included McNemar tests for binary outcomes and correlation analysis for multiclass comparisons, with a significance threshold of P < .05. Physician satisfaction scores showed strong correlation between models, with r-values of 0.665 and 0.558 and P-values below .001. Both models demonstrated high accuracy in data extraction and integration. The mean accuracy for primary lesion features was 91.7% for GPT-4 and 92.1% for DeepSeek, while feature synthesis accuracy was 93.4% for GPT-4 and 93.9% for DeepSeek. Metastatic lesion identification showed comparable overall accuracy at 93.5% for GPT-4 and 94.4% for DeepSeek. GPT-4 performed better in pleural lesion detection, with 94.9% accuracy compared to 79.5% for DeepSeek, whereas DeepSeek achieved higher accuracy in mesenteric metastasis identification at 87.5% vs 43.8% for GPT-4. TNM staging accuracy exceeded 92% for T-stage and 94% for M-stage, with N-stage accuracy improving beyond 90% when supplemented with physical exam data.
Both GPT-4 and DeepSeek effectively generate structured breast cancer imaging reports with high accuracy in data mining, integration, and TNM staging. Integrating these models into clinical practice is expected to enhance report standardization and physician productivity.
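As the abstract notes, paired binary outcomes (e.g. whether each model's extracted item was correct for the same case) were compared with McNemar tests. A minimal sketch of a continuity-corrected McNemar test on simulated per-case correctness flags (the data and the 0.92/0.93 rates below are illustrative, not the study's):

```python
import numpy as np
from scipy.stats import chi2

def mcnemar_test(model_a_correct, model_b_correct):
    """Continuity-corrected McNemar test on paired binary outcomes.

    model_a_correct / model_b_correct: boolean sequences, one entry per
    case, True when that model's output was judged correct for the case.
    """
    a = np.asarray(model_a_correct, dtype=bool)
    b = np.asarray(model_b_correct, dtype=bool)
    # Only discordant pairs (exactly one model correct) carry information.
    n_ab = int(np.sum(a & ~b))   # A correct, B wrong
    n_ba = int(np.sum(~a & b))   # B correct, A wrong
    if n_ab + n_ba == 0:
        return 0.0, 1.0          # no discordance: no evidence of a difference
    stat = (abs(n_ab - n_ba) - 1) ** 2 / (n_ab + n_ba)
    p = chi2.sf(stat, df=1)      # one-degree-of-freedom chi-square tail
    return stat, p

rng = np.random.default_rng(0)
gpt4 = rng.random(200) < 0.92        # simulated per-case correctness
deepseek = rng.random(200) < 0.93
stat, p = mcnemar_test(gpt4, deepseek)
print(round(stat, 3), round(p, 3))
```

Because the two models score the same cases, McNemar is the appropriate paired test; an unpaired proportion test would waste that pairing.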

A novel approach for CT image smoothing: Quaternion Bilateral Filtering for kernel conversion.

Nasr M, Piórkowski A, Brzostowski K, El-Samie FEA

PubMed · Aug 7 2025
Denoising reconstructed Computed Tomography (CT) images without access to raw projection data remains a significant difficulty in medical imaging, particularly when utilizing sharp or medium reconstruction kernels that generate high-frequency noise. This work introduces an innovative method that integrates quaternion mathematics with bilateral filtering to resolve this issue. The proposed Quaternion Bilateral Filter (QBF) effectively maintains anatomical structures and mitigates noise caused by the kernel by expressing CT scans in quaternion form, with the red, green, and blue channels encoded together. Compared to conventional methods that depend on raw data or grayscale filtering, our approach functions directly on reconstructed sharp kernel images. It converts them to mimic the quality of soft-kernel outputs, obtained with kernels such as B30f, using paired data from the same patients. The efficacy of the QBF is evidenced by both full-reference metrics (Structural Similarity Index Measure (SSIM), Peak Signal-to-Noise Ratio (PSNR), Mean Absolute Error (MAE), and Root Mean Squared Error (RMSE)) and no-reference perceptual metrics (Naturalness Image Quality Evaluator (NIQE), Blind Referenceless Image Spatial Quality Evaluator (BRISQUE), and Perception-based Image Quality Evaluator (PIQE)). The results indicate that the QBF demonstrates improved denoising efficacy compared to traditional Bilateral Filter (BF), Non-Local Means (NLM), wavelet, and Convolutional Neural Network (CNN)-based processing, achieving an SSIM of 0.96 and a PSNR of 36.3 on B50f reconstructions. Additionally, segmentation-based visual validation verifies that QBF-filtered outputs maintain essential structural details necessary for subsequent diagnostic tasks. This study emphasizes the importance of quaternion-based filtering as a lightweight, interpretable, and efficient substitute for deep learning models in post-reconstruction CT image enhancement.
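The core idea (encoding the RGB channels of a pixel as a pure quaternion, so that colour differences are measured by the quaternion modulus, which for pure quaternions equals the Euclidean RGB distance) can be sketched as a bilateral filter whose range kernel uses that modulus. This is a simplified illustration under those assumptions, not the paper's implementation:

```python
import numpy as np

def quaternion_bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Bilateral filter whose range weight is the modulus of the
    quaternion difference between pixels: with RGB encoded as a pure
    quaternion, that modulus is simply the Euclidean RGB distance.

    img: float array (H, W, 3) with values in [0, 1].
    """
    h, w, _ = img.shape
    pad = np.pad(img, ((radius, radius), (radius, radius), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    # Precompute the spatial Gaussian kernel once.
    ax = np.arange(-radius, radius + 1)
    xx, yy = np.meshgrid(ax, ax)
    spatial = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            diff = patch - img[i, j]
            # Squared quaternion modulus of the colour difference.
            range_w = np.exp(-np.sum(diff**2, axis=2) / (2 * sigma_r**2))
            wgt = spatial * range_w
            out[i, j] = (wgt[..., None] * patch).sum(axis=(0, 1)) / wgt.sum()
    return out

noisy = 0.5 + 0.05 * np.random.default_rng(1).standard_normal((16, 16, 3))
smoothed = quaternion_bilateral_filter(np.clip(noisy, 0, 1))
print(noisy.std() > smoothed.std())
```

Treating the three channels jointly is what distinguishes this from filtering each channel independently: a pixel only receives weight from neighbours whose full colour vector is close, not just one channel.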

Robustness evaluation of an artificial intelligence-based automatic contouring software in daily routine practice.

Fontaine J, Suszko M, di Franco F, Leroux A, Bonnet E, Bosset M, Langrand-Escure J, Clippe S, Fleury B, Guy JB

PubMed · Aug 7 2025
AI-based automatic contouring streamlines radiotherapy by reducing contouring time but requires rigorous validation and ongoing daily monitoring. This study assessed how software updates affect contouring accuracy and examined how image quality variations influence AI performance. Two patient cohorts were analyzed. The software updates cohort (40 CT scans: 20 thorax, 10 pelvis, 10 H&N) compared six versions of Limbus AI contouring software. The image quality cohort (20 patients: H&N, pelvis, brain, thorax) analyzed 12 reconstructions per patient using Standard, iDose, and IMR algorithms, with simulated noise and spatial resolution (SR) degradations. AI performance was assessed using the Volumetric Dice Similarity Coefficient (vDSC) and 95% Hausdorff Distance (HD95%), with Wilcoxon tests for significance. In the software updates cohort, vDSC improved for re-trained structures across versions (mean DSC ≥ 0.75), with breast contour vDSC decreasing by 1% between v1.5 and v1.8B3 (p > 0.05). Median HD95% values were consistently <4 mm, <5 mm, and <12 mm for H&N, pelvis, and thorax contours, respectively (p > 0.05). In the image quality cohort, no significant differences were observed between the Standard, iDose, and IMR algorithms. However, noise and SR degradation significantly reduced performance: the proportion of contours with vDSC ≥ 0.9 dropped from 89% at 2% noise to 30% at 20% noise, and from 87% to 70% as SR degradation increased (p < 0.001). AI contouring accuracy improved with software updates and showed robustness to minor reconstruction variations, but it was sensitive to noise and SR degradation. Continuous validation and quality control of AI-generated contours are essential. Future studies should include a broader range of anatomical regions and larger cohorts.
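Both metrics used in this evaluation are standard and easy to sketch. The helpers below compute a volumetric Dice coefficient and a simplified HD95% on binary masks; a production implementation would first extract surface voxels and apply the scan's physical spacing, which is omitted here for brevity:

```python
import numpy as np
from scipy.spatial.distance import cdist

def dice(a, b):
    """Volumetric Dice similarity coefficient between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def hd95(a, b, spacing=1.0):
    """95th-percentile symmetric Hausdorff distance between the
    foreground voxels of two binary masks (simplified: uses all
    foreground voxels rather than surface voxels only)."""
    pa = np.argwhere(a) * spacing
    pb = np.argwhere(b) * spacing
    d = cdist(pa, pb)                       # all pairwise distances
    return max(np.percentile(d.min(axis=1), 95),   # a -> b direction
               np.percentile(d.min(axis=0), 95))   # b -> a direction

gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
pred = np.zeros((32, 32), bool); pred[9:25, 8:24] = True  # shifted 1 voxel
print(round(dice(gt, pred), 3), hd95(gt, pred))  # → 0.938 1.0
```

Using the 95th percentile instead of the maximum makes the distance robust to a few stray voxels, which is why HD95% rather than the plain Hausdorff distance is the usual contouring QA metric.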

MedCLIP-SAMv2: Towards universal text-driven medical image segmentation.

Koleilat T, Asgariandehkordi H, Rivaz H, Xiao Y

PubMed · Aug 7 2025
Segmentation of anatomical structures and pathologies in medical images is essential for modern disease diagnosis, clinical research, and treatment planning. While significant advancements have been made in deep learning-based segmentation techniques, many of these methods still suffer from limitations in data efficiency, generalizability, and interactivity. As a result, developing robust segmentation methods that require fewer labeled datasets remains a critical challenge in medical image analysis. Recently, the introduction of foundation models like CLIP and Segment-Anything-Model (SAM), with robust cross-domain representations, has paved the way for interactive and universal image segmentation. However, further exploration of these models for data-efficient segmentation in medical imaging is an active field of research. In this paper, we introduce MedCLIP-SAMv2, a novel framework that integrates the CLIP and SAM models to perform segmentation on clinical scans using text prompts, in both zero-shot and weakly supervised settings. Our approach includes fine-tuning the BiomedCLIP model with a new Decoupled Hard Negative Noise Contrastive Estimation (DHN-NCE) loss, and leveraging the Multi-modal Information Bottleneck (M2IB) to create visual prompts for generating segmentation masks with SAM in the zero-shot setting. We also investigate using zero-shot segmentation labels in a weakly supervised paradigm to enhance segmentation quality further. Extensive validation across four diverse segmentation tasks and medical imaging modalities (breast tumor ultrasound, brain tumor MRI, lung X-ray, and lung CT) demonstrates the high accuracy of our proposed framework. Our code is available at https://github.com/HealthX-Lab/MedCLIP-SAMv2.
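The DHN-NCE loss builds on the standard bidirectional InfoNCE objective used to align CLIP-style image and text embeddings. A NumPy sketch of that base objective follows; the decoupling of the positive term and the hard-negative weighting described in the paper are deliberately omitted, so this shows only the common starting point:

```python
import numpy as np

def info_nce(img_emb, txt_emb, temperature=0.07):
    """Bidirectional InfoNCE loss over L2-normalised image/text embeddings.
    Matched pairs share a row index; every other row in the batch is a
    negative. (DHN-NCE additionally up-weights hard negatives and
    decouples the positive term; only the base loss is shown here.)"""
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (N, N) similarity matrix
    labels = np.arange(len(logits))             # positives on the diagonal

    def xent(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        logp = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
emb = rng.standard_normal((8, 32))
aligned_loss = info_nce(emb, emb)                     # perfectly aligned pairs
random_loss = info_nce(emb, rng.standard_normal((8, 32)))
print(aligned_loss < random_loss)
```

Perfectly aligned pairs drive the loss toward zero, while unrelated embeddings stay near log(N); hard-negative variants sharpen this contrast further by emphasising the negatives closest to each positive.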

Hybrid Neural Networks for Precise Hydronephrosis Classification Using Deep Learning.

Salam A, Naznine M, Chowdhury MEH, Agzamkhodjaev S, Tekin A, Vallasciani S, Ramírez-Velázquez E, Abbas TO

PubMed · Aug 7 2025
To develop and evaluate a deep learning framework for automatic kidney and fluid segmentation in renal ultrasound images, aiming to enhance diagnostic accuracy and reduce variability in hydronephrosis assessment. A dataset of 1,731 renal ultrasound images, annotated by four experienced urologists, was used for model training and evaluation. The proposed framework integrates a DenseNet201 backbone, Feature Pyramid Network (FPN), and Self-Organized Operational Neural Network (Self-ONN) layers to enable multi-scale feature extraction and improve spatial precision. Several architectures were tested under identical conditions to ensure fair comparison. Segmentation performance was assessed using standard metrics, including Dice coefficient, precision, and recall. The framework also supported hydronephrosis classification using the fluid-to-kidney area ratio, with a threshold of 0.213 derived from prior literature. The model achieved strong segmentation performance for kidneys (Dice: 0.92, precision: 0.93, recall: 0.91) and fluid regions (Dice: 0.89, precision: 0.90, recall: 0.88), outperforming baseline methods. The classification accuracy for detecting hydronephrosis reached 94%, based on the computed fluid-to-kidney ratio. Performance was consistent across varied image qualities, reflecting the robustness of the overall architecture. This study presents an automated, objective pipeline for analyzing renal ultrasound images. The proposed framework supports high segmentation accuracy and reliable classification, facilitating standardized and reproducible hydronephrosis assessment. Future work will focus on model optimization and incorporating explainable AI to enhance clinical integration.
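The classification step reduces to a single ratio test against the 0.213 threshold quoted above. A minimal sketch on hypothetical binary masks (the mask shapes and helper name are illustrative):

```python
import numpy as np

# Threshold taken from the abstract (0.213, derived from prior literature).
HYDRONEPHROSIS_THRESHOLD = 0.213

def classify_hydronephrosis(kidney_mask, fluid_mask,
                            threshold=HYDRONEPHROSIS_THRESHOLD):
    """Flag hydronephrosis when the segmented fluid-to-kidney area ratio
    exceeds the threshold. Masks are binary arrays from the segmentation
    model, in the same image grid."""
    kidney_area = np.count_nonzero(kidney_mask)
    fluid_area = np.count_nonzero(fluid_mask)
    if kidney_area == 0:
        raise ValueError("empty kidney mask")
    ratio = fluid_area / kidney_area
    return ratio, ratio > threshold

kidney = np.zeros((64, 64), bool); kidney[10:50, 10:50] = True  # 1600 px
fluid = np.zeros((64, 64), bool); fluid[20:40, 20:40] = True    #  400 px
ratio, positive = classify_hydronephrosis(kidney, fluid)
print(round(ratio, 3), positive)  # → 0.25 True
```

Because the decision depends only on the area ratio, the reported 94% classification accuracy is bounded by how well the two segmentation masks are produced upstream.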

Artificial intelligence in forensic neuropathology: A systematic review.

Treglia M, La Russa R, Napoletano G, Ghamlouch A, Del Duca F, Treves B, Frati P, Maiese A

PubMed · Aug 7 2025
In recent years, Artificial Intelligence (AI) has gained prominence as a robust tool for clinical decision-making and diagnostics, owing to its capacity to process and analyze large datasets with high accuracy. More specifically, Deep Learning, and its subclasses, have shown significant potential in image processing, including medical imaging and histological analysis. In forensic pathology, AI has been employed for the interpretation of histopathological data, identifying conditions such as myocardial infarction, traumatic injuries, and heart rhythm abnormalities. This review aims to highlight key advances in AI's role, particularly machine learning (ML) and deep learning (DL) techniques, in forensic neuropathology, with a focus on its ability to interpret instrumental and histopathological data to support professional diagnostics. A systematic review of the literature regarding applications of AI in forensic neuropathology was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards. We selected 34 articles regarding the main applications of AI in this field, dividing them into two categories: those addressing traumatic brain injury (TBI), including intracranial hemorrhage or cerebral microbleeds, and those focusing on epilepsy and SUDEP, including brain disorders and central nervous system neoplasms capable of inducing seizure activity. In both cases, the application of AI techniques demonstrated promising results in the forensic investigation of cerebral pathology, providing a valuable computer-assisted diagnostic tool to aid in post-mortem computed tomography (PMCT) assessments of cause of death and histopathological analyses. In conclusion, this paper presents a comprehensive overview of the key neuropathology areas where the application of artificial intelligence can be valuable in investigating causes of death.

Automated detection of wrist ganglia in MRI using convolutional neural networks.

Hämäläinen M, Sormaala M, Kaseva T, Salli E, Savolainen S, Kangasniemi M

PubMed · Aug 7 2025
To investigate the feasibility of a method combining segmenting convolutional neural networks (CNNs) for the automated detection of ganglion cysts in 2D MRI of the wrist. The study serves as a proof of concept, demonstrating a method to decrease false positives and offering an efficient solution for ganglia detection. We retrospectively analyzed 58 MRI studies with wrist ganglia, each including 2D axial, sagittal, and coronal series. Manual segmentations were performed by a radiologist and used to train CNNs for automatic segmentation of each orthogonal series. Predictions were fused into a single 3D volume using a proposed prediction fusion method. Performance was evaluated over all studies using six-fold cross-validation, comparing method variations with metrics including true positive rate (TPR), number of false positives (FP), and F-score. The proposed method reached a mean TPR of 0.57, a mean FP count of 0.4, and a mean F-score of 0.53. Fusion of series predictions significantly decreased the number of false positives but also decreased TPR values. CNNs can detect ganglion cysts in wrist MRI. The number of false positives can be decreased by a method of prediction fusion from multiple CNNs.
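One simple way to realize the described fusion, and its false-positive suppression, is voxel-wise voting across the three orthogonal predictions: a finding must be seen by more than one view to survive. The voting rule below is illustrative; the paper's exact fusion method may differ:

```python
import numpy as np

def fuse_orthogonal_predictions(axial, sagittal, coronal,
                                threshold=0.5, min_views=2):
    """Fuse per-plane CNN probability volumes into one 3-D detection mask.
    A voxel is kept only when at least `min_views` of the three orthogonal
    networks mark it as foreground; requiring agreement between planes is
    what suppresses single-view false positives."""
    votes = sum((p >= threshold).astype(np.uint8)
                for p in (axial, sagittal, coronal))
    return votes >= min_views

rng = np.random.default_rng(0)
shape = (16, 16, 16)
true_lesion = np.zeros(shape, bool); true_lesion[4:8, 4:8, 4:8] = True
# Each plane detects the lesion but adds its own spurious blob.
preds = []
for k in range(3):
    p = true_lesion.astype(float) * 0.9
    p[12 + k, 12, 12] = 0.9          # false positive unique to this view
    preds.append(p)
fused = fuse_orthogonal_predictions(*preds)
print(fused[5, 5, 5], fused[12, 12, 12])  # → True False
```

The trade-off the abstract reports (fewer false positives but a lower TPR) follows directly from this kind of rule: a true lesion missed by two of the three views is also discarded.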

Optimizing contrast-enhanced abdominal MRI: A comparative study of deep learning and standard VIBE techniques.

Herold A, Mercaldo ND, Anderson MA, Mojtahed A, Kilcoyne A, Lo WC, Sellers RM, Clifford B, Nickel MD, Nakrour N, Huang SY, Tsai LL, Catalano OA, Harisinghani MG

PubMed · Aug 7 2025
To validate a deep learning (DL) reconstruction technique for faster post-contrast enhanced coronal Volume Interpolated Breath-hold Examination (VIBE) sequences and assess its image quality compared to conventionally acquired coronal VIBE sequences. This prospective study included 151 patients undergoing clinically indicated upper abdominal MRI acquired on 3 T scanners. Two coronal T1 fat-suppressed VIBE sequences were acquired: a DL-reconstructed sequence (VIBE-DL) and a standard sequence (VIBE-SD). Three radiologists independently evaluated six image quality parameters: overall image quality, perceived signal-to-noise ratio, severity of artifacts, liver edge sharpness, liver vessel sharpness, and lesion conspicuity, using a 4-point Likert scale. Inter-reader agreement was assessed using Gwet's AC2. Ordinal mixed-effects regression models were used to compare VIBE-DL and VIBE-SD. Acquisition times were 10.2 s for VIBE-DL compared to 22.3 s for VIBE-SD. VIBE-DL demonstrated superior overall image quality (OR 1.95, 95% CI: 1.44-2.65, p < 0.001), reduced image noise (OR 3.02, 95% CI: 2.26-4.05, p < 0.001), enhanced liver edge sharpness (OR 3.68, 95% CI: 2.63-5.15, p < 0.001), improved liver vessel sharpness (OR 4.43, 95% CI: 3.13-6.27, p < 0.001), and better lesion conspicuity (OR 9.03, 95% CI: 6.34-12.85, p < 0.001) compared to VIBE-SD. However, VIBE-DL showed increased severity of peripheral artifacts (OR 0.13, p < 0.001). VIBE-DL detected 137/138 (99.3%) focal liver lesions, while VIBE-SD detected 131/138 (94.9%). Inter-reader agreement ranged from good to very good for both sequences. The DL-reconstructed VIBE sequence significantly outperformed the standard breath-hold VIBE in image quality and lesion detection, while reducing acquisition time.
This technique shows promise for enhancing the diagnostic capabilities of contrast-enhanced abdominal MRI.

X-UNet: A novel global context-aware collaborative fusion U-shaped network with progressive feature fusion of codec for medical image segmentation.

Xu S, Chen Y, Zhang X, Sun F, Chen S, Ou Y, Luo C

PubMed · Aug 7 2025
Due to the inductive bias of convolutions, CNNs perform hierarchical feature extraction efficiently in the field of medical image segmentation. However, the local correlation assumption of inductive bias limits the ability of convolutions to focus on global information, which has led to the performance of Transformer-based methods surpassing that of CNNs in some segmentation tasks in recent years. Although combining with Transformers can solve this problem, it also introduces computational complexity and considerable parameters. In addition, narrowing the encoder-decoder semantic gap for high-quality mask generation is a key challenge, addressed in recent works through feature aggregation from different skip connections. However, this often results in semantic mismatches and additional noise. In this paper, we propose a novel segmentation method, X-UNet, whose backbones employ the CFGC (Collaborative Fusion with Global Context-aware) module. The CFGC module enables multi-scale feature extraction and effective global context modeling. Simultaneously, we employ the CSPF (Cross Split-channel Progressive Fusion) module to progressively align and fuse features from corresponding encoder and decoder stages through channel-wise operations, offering a novel approach to feature integration. Experimental results demonstrate that X-UNet, with fewer computations and parameters, exhibits superior performance on various medical image datasets. The code and models are available on https://github.com/XSJ0410/X-UNet.

Predictive Modeling of Osteonecrosis of the Femoral Head Progression Using MobileNetV3_Large and Long Short-Term Memory Network: Novel Approach.

Kong G, Zhang Q, Liu D, Pan J, Liu K

PubMed · Aug 6 2025
The assessment of osteonecrosis of the femoral head (ONFH) often presents challenges in accuracy and efficiency. Traditional methods rely on imaging studies and clinical judgment, prompting the need for advanced approaches. This study aims to use deep learning algorithms to enhance disease assessment and prediction in ONFH, optimizing treatment strategies. The primary objective of this research is to analyze pathological images of ONFH using advanced deep learning algorithms to evaluate treatment response, vascular reconstruction, and disease progression. By identifying the most effective algorithm, this study seeks to equip clinicians with precise tools for disease assessment and prediction. Magnetic resonance imaging (MRI) data from 30 patients diagnosed with ONFH were collected, totaling 1200 slices, which included 675 slices with lesions and 225 normal slices. The dataset was divided into training (630 slices), validation (135 slices), and test (135 slices) sets. A total of 10 deep learning algorithms were tested for training and optimization, and MobileNetV3_Large was identified as the optimal model for subsequent analyses. This model was applied for quantifying vascular reconstruction, evaluating treatment responses, and assessing lesion progression. In addition, a long short-term memory (LSTM) model was integrated for the dynamic prediction of time-series data. The MobileNetV3_Large model demonstrated an accuracy of 96.5% (95% CI 95.1%-97.8%) and a recall of 94.8% (95% CI 93.2%-96.4%) in ONFH diagnosis, significantly outperforming DenseNet201 (87.3%; P<.05). Quantitative evaluation of treatment responses showed that vascularized bone grafting resulted in an average increase of 12.4 mm in vascular length (95% CI 11.2-13.6 mm; P<.01) and an increase of 2.7 in branch count (95% CI 2.3-3.1; P<.01) among the 30 patients. 
The model achieved an AUC of 0.92 (95% CI 0.90-0.94) for predicting lesion progression, outperforming traditional methods like ResNet50 (AUC=0.85; P<.01). Predictions were consistent with clinical observations in 92.5% of cases (24/26). The application of deep learning algorithms in examining treatment response, vascular reconstruction, and disease progression in ONFH presents notable advantages. This study offers clinicians a precise tool for disease assessment and highlights the significance of using advanced technological solutions in health care practice.
