Han Y, Peng L, Zhang G, Yang S, Ji C, Gu H, Wang X

PubMed | Dec 10, 2025
To investigate the feasibility of using 60 kVp coronary CT angiography (CCTA) combined with deep learning-based CT reconstruction as a screening tool for asymptomatic patients. A total of 156 asymptomatic patients (body mass index, 24.4 ± 2.2 kg/m²) with at least one coronary artery disease (CAD) risk factor were prospectively enrolled and underwent an experimental ultra-low-dose 60 kVp CCTA followed by a routine 120 kVp CCTA. Stenosis detection, plaque analysis, and image quality assessment were performed on both scans, with 120 kVp CCTA serving as the reference. The mean effective dose and mean contrast medium (CM) dosage were 0.4 ± 0.1 mSv and 27.0 ± 3.2 mL, respectively, for 60 kVp CCTA, corresponding to reductions of 91.5% and 50.0% compared with 120 kVp CCTA. In the analyses of both all plaque types and noncalcific plaques, the sensitivity, specificity, and accuracy of stenosis detection with 60 kVp CCTA were >92% on a per-segment, per-vessel, and per-patient basis, and the negative predictive value in particular was ≥97%. However, compared with 120 kVp CCTA, 60 kVp CCTA led to a significant overestimation of plaque volume and stenosis severity (P<0.01), as well as inferior subjective scores for vessel and lumen delineation (P<0.05). Despite the overestimation of plaque volume and stenosis severity, 60 kVp CCTA showed excellent stenosis detection capability at an ultra-low radiation dose and with a reduced CM dosage, and may potentially be adopted as a screening tool for asymptomatic patients in routine practice.
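The detection metrics quoted above (sensitivity, specificity, accuracy, negative predictive value) follow the standard confusion-matrix definitions. A minimal sketch of how such values are computed, using hypothetical per-segment counts rather than the study's data:

```python
# Hypothetical per-segment counts; not taken from the study.
tp, fp, tn, fn = 46, 5, 890, 3

sensitivity = tp / (tp + fn)          # proportion of true stenoses detected
specificity = tn / (tn + fp)          # proportion of normal segments correctly ruled out
accuracy = (tp + tn) / (tp + fp + tn + fn)
npv = tn / (tn + fn)                  # negative predictive value: confidence in a negative read

print(f"Sensitivity {sensitivity:.1%}, specificity {specificity:.1%}, "
      f"accuracy {accuracy:.1%}, NPV {npv:.1%}")
```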

Song W, Tang F, Marshall H, Fong KM, Liu F

PubMed | Dec 9, 2025
Early and accurate detection of pulmonary nodules in computed tomography (CT) scans is critical for reducing lung cancer mortality. While convolutional neural networks (CNNs) and Transformer-based architectures have been widely used for this task, they often suffer from insufficient global context awareness, quadratic complexity, and dependence on post-processing steps such as non-maximum suppression (NMS). This study aims to develop a novel 3D lung nodule detection framework that balances local and global contextual awareness with low computational complexity, while minimizing reliance on manual threshold tuning and redundant post-processing. We propose FCMamba, a flexible connected visual state-space model adapted from the recently introduced Mamba architecture. To enhance spatial modeling, we introduce a flexible path encoding strategy that reorders 3D feature sequences adaptively based on input relevance. In addition, a Top Query Matcher, guided by the Hungarian matching algorithm, is integrated into the training process to replace traditional NMS and enable end-to-end one-to-one nodule matching. The model is trained and evaluated using 10-fold cross-validation on the LIDC-IDRI dataset, which contains 888 CT scans. FCMamba outperforms several state-of-the-art methods, including CNN, Transformer, and hybrid models, across seven predefined false positives per scan (FPs/scan) levels. It achieves a sensitivity improvement of 2.6% to 20.3% at the lowest FPs/scan level (0.125) and delivers the highest CPM and FROC-AUC scores. The proposed method demonstrates balanced performance across nodule sizes, reduced false positives, and improved robustness, particularly in high-confidence predictions. FCMamba provides an efficient, scalable, and accurate solution for 3D lung nodule detection. Its flexible spatial modeling and elimination of post-processing make it well suited for clinical use and adaptable to other medical imaging tasks.
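The Top Query Matcher described above uses the Hungarian algorithm to enforce one-to-one assignment between predicted candidates and ground-truth nodules, which is what lets the framework drop NMS. A minimal sketch of that matching step with SciPy's assignment solver; the cost matrix here is a simple distance-plus-confidence placeholder, not the paper's actual formulation:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical data: 4 predicted nodule centers (z, y, x) with confidence scores,
# and 2 ground-truth nodule centers. The cost combines distance and (1 - confidence);
# the exact cost used by FCMamba is not specified here.
pred_centers = np.array([[10, 32, 40], [55, 60, 61], [12, 30, 41], [80, 10, 90]], float)
pred_scores = np.array([0.9, 0.7, 0.4, 0.2])
gt_centers = np.array([[11, 31, 40], [54, 62, 60]], float)

dist = np.linalg.norm(pred_centers[:, None, :] - gt_centers[None, :, :], axis=-1)
cost = dist + 10.0 * (1.0 - pred_scores)[:, None]   # placeholder weighting

rows, cols = linear_sum_assignment(cost)             # optimal one-to-one assignment
for r, c in zip(rows, cols):
    print(f"prediction {r} matched to ground-truth nodule {c} (cost {cost[r, c]:.2f})")
```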

Al Zaabi A, Alshibli R, AlAmri A, AlRuheili I, Lutfi SL

PubMed | Dec 9, 2025
The use of large language models (LLMs) in radiology is expanding rapidly, offering new possibilities in report generation, decision support, and workflow optimization. However, a comprehensive evaluation of their applications, performance, and limitations across the radiology domain remains limited. This review aimed to map current applications of LLMs in radiology, evaluate their performance across key tasks, and identify prevailing limitations and directions for future research. A scoping review was conducted in accordance with the Arksey and O'Malley framework and the PRISMA-ScR (Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews) guidelines. Three databases (PubMed, Scopus, and IEEE Xplore) were searched for peer-reviewed studies published between January 2022 and December 2024. Eligible studies included empirical evaluations of LLMs applied to radiological data or workflows. Commentaries, reviews, and technical model proposals without evaluation were excluded. Two reviewers independently screened studies and extracted data on study characteristics, LLM type, radiological use case, data modality, and evaluation metrics. A thematic synthesis was used to identify key domains of application. No formal risk-of-bias assessment was performed, but a narrative appraisal of dataset representativeness and study quality was included. A total of 67 studies were included. GPT-4 was the most frequently used model (n=28, 42%), with text-based corpora as the primary type of data used (n=43, 64%). Identified use cases fell into three thematic domains: (1) decision support (n=39, 58%), (2) report generation and summarization (n=16, 24%), and (3) workflow optimization (n=12, 18%). While LLMs demonstrated strong performance in structured-text tasks (e.g., report simplification with >94% accuracy), diagnostic performance varied widely (16%-86%) and was limited by dataset bias, lack of fine-tuning, and minimal clinical validation. Most studies (n=53, 79.1%) had single-center, proof-of-concept designs with limited generalizability. LLMs show strong potential for augmenting radiological workflows, particularly for structured reporting, summarization, and educational tasks. However, their diagnostic performance remains inconsistent, and current implementations lack robust external validation. Future work should prioritize prospective, multicenter validation of domain-adapted and multimodal models to support safe clinical integration.

Zhang X, Qi Y, Wang X, Chen H, Li J

PubMed | Dec 9, 2025
Artificial intelligence (AI) techniques, particularly those using machine learning and deep learning to analyze multimodal imaging data, have shown considerable promise in enhancing preoperative prediction of extraprostatic extension (EPE) in prostate cancer. This meta-analysis compares the diagnostic performance of AI-enabled imaging techniques with that of radiologists for predicting preoperative EPE in prostate cancer. We conducted a systematic literature search in PubMed, Embase, and Web of Science up to September 2025, following PRISMA-DTA (Preferred Reporting Items for Systematic Reviews and Meta-Analysis of Diagnostic Test Accuracy) guidelines. Studies applying AI techniques to predict EPE using multiparametric magnetic resonance imaging (mpMRI) and prostate-specific membrane antigen positron emission tomography (PSMA PET) imaging were included. Sensitivity, specificity, and area under the curve (AUC) for both internal and external validation sets were extracted and pooled using a bivariate random-effects model. Study quality was assessed using a modified Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool. A total of 21 studies were included in the analysis. For internal validation sets in patient-based analyses, mpMRI-based AI demonstrated a pooled sensitivity of 0.77 (95% CI 0.71-0.82), specificity of 0.71 (95% CI 0.64-0.78), and AUC of 0.81 (95% CI 0.77-0.84). In external validation, mpMRI-based AI achieved a sensitivity of 0.66 (95% CI 0.43-0.84), specificity of 0.80 (95% CI 0.64-0.90), and AUC of 0.80 (95% CI 0.77-0.84). In comparison, radiologists achieved a pooled sensitivity of 0.69 (95% CI 0.60-0.76), specificity of 0.73 (95% CI 0.66-0.78), and AUC of 0.77 (95% CI 0.73-0.80). Statistical comparisons between mpMRI-based AI and radiologists showed no significant difference in sensitivity (Z=1.61; P=.10) or specificity (Z=0.43; P=.67). Conversely, the AUC of mpMRI-based AI was significantly higher than that of PSMA PET-based AI (Z=2.77; P=.01). PSMA PET-based AI showed moderate performance, with a sensitivity of 0.73 (95% CI 0.65-0.80), specificity of 0.61 (95% CI 0.30-0.85), and AUC of 0.74 (95% CI 0.70-0.77) in internal validation; in external validation, it demonstrated a sensitivity of 0.77 (95% CI 0.57-0.89) and specificity of 0.50 (95% CI 0.22-0.78), showing no significant advantage over radiologists. mpMRI-based AI demonstrated improved diagnostic performance for preoperative prediction of EPE in prostate cancer compared with conventional radiological assessment, achieving a higher AUC. However, PSMA PET-based AI models currently offer no significant advantage over either mpMRI-based AI or radiologists. Limitations include the retrospective designs and high heterogeneity of the included studies, which may introduce bias and affect generalizability. Larger, more diverse cohorts are essential to confirm these findings and optimize the integration of AI into clinical practice.
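The pooled estimates above come from a bivariate random-effects model fitted to study-level sensitivities and specificities. As a heavily simplified illustration only (a univariate DerSimonian-Laird pooling of logit-transformed sensitivities on made-up study counts, not the review's data or its actual bivariate model), the pooling mechanics look roughly like this:

```python
import numpy as np

# Hypothetical per-study true positives and total EPE-positive cases (not the review's data).
tp = np.array([40, 55, 30, 72])
pos = np.array([50, 70, 45, 90])

# Logit-transform each study's sensitivity and approximate its variance.
sens = tp / pos
logit = np.log(sens / (1 - sens))
var = 1 / tp + 1 / (pos - tp)

# DerSimonian-Laird estimate of the between-study variance (tau^2).
w = 1 / var
fixed = np.sum(w * logit) / np.sum(w)
q = np.sum(w * (logit - fixed) ** 2)
tau2 = max(0.0, (q - (len(tp) - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

# Random-effects weights, pooled logit sensitivity, back-transformed to a proportion.
w_re = 1 / (var + tau2)
pooled_logit = np.sum(w_re * logit) / np.sum(w_re)
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"Pooled sensitivity (illustrative): {pooled_sens:.2f}")
```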

Yu H, Dong J, Wang L, Li H, Wang M, Wang Y, Li Y, Yang Y, Ge Y, Zhang Y, Liu X, Yao Q, Guo A, Zhang Y, Li C

PubMed | Dec 9, 2025
The goals of this study were to develop an artificial intelligence (AI)-driven automated preoperative planning system for anterior cruciate ligament (ACL) reconstruction by integrating deep learning with computed tomography (CT)-magnetic resonance imaging (MRI) image fusion and segmentation, and to evaluate its accuracy. Structures on CT and MRI scans of 200 knee joints from patients with an intact ACL (aged 18 to 50 years, 81.0% male, all ethnic Chinese) were manually annotated. Fusion of the CT and MRI images was performed using a Dual-UNet registration architecture incorporating multiscale information fusion, enabling dynamic 3D reconstruction of the fused images for ACL insertion site identification and isometry assessment. A deep-learning framework was trained to analyze the fused image to precisely optimize ACL tunnel positioning, including identifying the entrances and exits of the femoral and tibial tunnels. Criteria in the automated planning included proximity to the ideal point, coverage of the anatomical footprint area, and isometric length variation of <2 mm. The accuracy of the AI system was then validated in 36 ACL reconstructions performed in bone models by comparing the drilled femoral and tibial tunnel lengths and graft length between the tunnels with the planned values. Finally, clinical feasibility was tested in 36 patients undergoing ACL reconstruction surgery using 3D-printed patient-specific guides derived from the AI planning, with 36 conventional surgeries as controls. Deviation of tunnel positions from the planned positions was compared between the 2 groups. CT-MRI image fusion was able to generate an individualized 3D model with high segmentation accuracy (Dice coefficient = 0.864). The AI planning required 192 ± 90.2 seconds per case. In the bone model validation, the mean deviation between the planned and executed values was <1 mm for the femoral and tibial tunnel lengths and graft length between the tunnels (all p > 0.05). In the clinical testing, the AI-guided group demonstrated significantly smaller deviations from the ideal point compared with the conventional group in the deep-to-shallow (D-S), high-to-low (H-L), medial-to-lateral (M-L), and anterior-to-posterior (A-P) directions (all p < 0.05). The AI-driven segmentation of CT-MRI fusion images and automatic preoperative ACL reconstruction planning demonstrated the capability to automatically, precisely, and reproducibly generate plans for nearly ideal tunnel entry and exit points with isometric, anatomical, and individualization characteristics. This technology is expected to hold clinical potential for ACL reconstruction, including reduced complication and revision rates and enhanced postoperative function.
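Segmentation accuracy above is summarized as a Dice coefficient of 0.864. A minimal sketch of how that overlap metric is computed for a pair of binary masks, using toy arrays rather than the study's annotations:

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Toy 3D masks standing in for a predicted and a manually annotated segmentation.
rng = np.random.default_rng(0)
truth = rng.random((32, 32, 32)) > 0.7
pred = truth.copy()
pred[:4] = False                      # simulate a small under-segmentation error
print(f"Dice: {dice_coefficient(pred, truth):.3f}")
```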

Enters J, Hoppe BF, Reidler P, Ingrisch M, Ricke J, Shiyam Sundar LK, Cyran C

PubMed | Dec 9, 2025
The integration of artificial intelligence (AI) into musculoskeletal radiology has the potential to revolutionize diagnostic precision and efficiency. At the beginning of the radiological diagnostic workflow, AI algorithms can significantly reduce examination time, for example during MRI image acquisition. Certain AI algorithms achieve a diagnostic accuracy in fracture detection comparable to that of board-certified radiologists and perform time-consuming diagnostic analyses, such as joint and axis measurements, fully automatically. Generative AI based on large language models (LLMs) can contribute to the automation of structured reporting, with the first clinical applications already available. While AI algorithms for accelerating MRI acquisition already contribute substantially to efficiency, the main challenge in the clinical translation of AI algorithms for automated reading support is the small number of commercially available algorithms that offer significant clinical value for quality and efficiency in the diagnostic workflow. Internationally accepted key performance indicators (KPIs) for quantitatively measuring the return on investment (ROI) against the high running costs are also largely lacking.

Cai Y, Dall'Ara E, Lacroix D, Guo L

PubMed | Dec 9, 2025
Subject-specific finite-element analysis (FEA) models enable accurate simulation of vertebral biomechanics but are often time-consuming to construct and solve under varying conditions. This study presents a novel deep learning (DL)/machine learning (ML)-based surrogate model that predicts stress distributions in vertebral bodies with high efficiency. The model integrates vertebral shape encoding and employs separate decoding branches for surface and internal nodes. It was trained on 3,960 synthetic L1 vertebrae generated via data augmentation from 42 real computed tomography (CT) scans. Evaluation on independent test samples yielded a mean absolute error (MAE) of 0.0596 MPa and an R² of 0.864 for von Mises stress. Visualization results confirm strong agreement between predicted and FEA-computed stress patterns, with localized discrepancies observed at the anteroinferior margin and pedicles. Moreover, an end-to-end automated pipeline was established based on the developed model, reducing the total processing time from 90-120 min to approximately 134-154 s per subject. These findings highlight the potential of the proposed surrogate model to facilitate rapid, subject-specific biomechanical assessments in clinical workflows.
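The surrogate model predicts nodal von Mises stress, the scalar equivalent stress that FEA post-processing derives from the full stress tensor. For reference, a short sketch of that standard formula on toy stress components (not values from the study):

```python
import numpy as np

def von_mises(sxx, syy, szz, sxy, syz, szx):
    """Von Mises equivalent stress from the six stress-tensor components (MPa)."""
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2 + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2))

# Toy nodal stress components standing in for FEA output.
print(f"{von_mises(1.2, 0.4, -0.3, 0.2, 0.1, 0.05):.3f} MPa")
```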

Dournes G, Hadj Bouzid AI, Doucet K, Benlala I, Maurac A, Blanchard E, Dupin I, Berger P, Henrot P, Zysman M

PubMed | Dec 9, 2025
Alpha-1-antitrypsin deficiency (AATD) is a rare genetic disorder leading to chronic obstructive pulmonary disease (COPD). Emphysema is the major structural damage visible on CT scans. However, little is known about the association between other structural abnormalities, such as bronchiectasis (BE), airway wall thickening (WT), or mucus plugs (MP), and clinical features. This was a retrospective study of all consecutive AATD patients seen at the University Hospital of Bordeaux between 2008 and 2022. Bronchial and parenchymal alterations were evaluated with an artificial intelligence (AI)-driven Normalized Volume of Airway Abnormalities (NOVAA-CT) scoring system, including BE, WT, MP, and emphysema quantifications. We evaluated correlations of these CT scores with forced expiratory volume in 1 s (FEV1%), dyspnea severity on the mMRC scale, and the occurrence of at least one exacerbation in the year following the CT scan. Fifty-two AATD patients were included (median FEV1: 47% (40-65)). CT features of BE, WT, and MP were present in 100%, 94.2%, and 59% of the study population, respectively, with a lower- versus upper-lung predominance (p < 0.05). WT (p < 0.001) and BE (p = 0.04) correlated with FEV1% but not with mMRC (p ≥ 0.09). Conversely, MP did not correlate with FEV1% (p = 0.08) but did correlate with mMRC (p = 0.01). Emphysema strongly correlated with both FEV1% and mMRC (p < 0.001). In multivariate analysis, after adjustment for age, genotype, and tobacco consumption, the best predictor of exacerbation was WT (OR = 1.12 [1.02-1.22]; p = 0.01). This study demonstrates that structural airway abnormalities identified with AI assistance are frequent in AATD patients and carry distinct clinical significance. Among them, WT was the most robust predictor of exacerbations. Question: Emphysema is the major structural damage in alpha-1-antitrypsin deficiency (AATD); clinical associations of bronchial abnormalities such as bronchiectasis (BE), mucus plugs (MP), and wall thickening (WT) are lacking. Findings: Quantitative CT measures of BE and WT correlated with PFT (p ≤ 0.05), while MP correlated with the dyspnea scale (p = 0.01). The best predictor of exacerbation was WT (OR = 1.12 [1.02-1.24]). Clinical relevance: AI-assisted identification of bronchial abnormalities, in addition to emphysema, is frequent in AATD patients and carries distinct clinical significance. These findings highlight the importance of comprehensive CT-based evaluations to better characterize disease phenotype and guide clinical management in AATD.
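The adjusted odds ratio for WT quoted above comes from a multivariate logistic regression. A minimal, hypothetical sketch of that kind of analysis with statsmodels, on simulated variables rather than the cohort's data (genotype is omitted here for brevity):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Simulated cohort: WT score, age, tobacco exposure, and exacerbation outcome
# (purely synthetic values for illustration, not the study's cohort).
rng = np.random.default_rng(1)
n = 200
df = pd.DataFrame({
    "wt": rng.normal(20, 8, n),
    "age": rng.normal(55, 10, n),
    "tobacco": rng.integers(0, 2, n),
})
logit_p = -4 + 0.1 * df["wt"] + 0.02 * df["age"] + 0.5 * df["tobacco"]
df["exacerbation"] = rng.random(n) < 1 / (1 + np.exp(-logit_p))

# Fit the adjusted logistic model and report odds ratios with 95% CIs.
X = sm.add_constant(df[["wt", "age", "tobacco"]])
fit = sm.Logit(df["exacerbation"].astype(int), X).fit(disp=False)
odds_ratios = np.exp(fit.params)
conf_int = np.exp(fit.conf_int())
print(pd.concat([odds_ratios.rename("OR"), conf_int], axis=1))
```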

Zhang J, Ji Z, Zhao C, Huang M, Li M, Zhang H

PubMed | Dec 8, 2025
Endoscopic imaging is vital in Minimally Invasive Surgery (MIS), but its utility is often compromised by specular reflections that obscure important details and hinder diagnostic accuracy. Existing methods to address these reflections have limitations, particularly their reliance on color-based thresholding and their underuse of deep learning for highlight detection. To tackle these challenges, we propose the Specular Detection Median Filtering Fusion Network (SDMFFN), a novel framework designed to detect and remove specular reflections in endoscopic images. SDMFFN employs a two-stage process: detection and removal. In the detection phase, we use an enhanced Specular Transformer Unet (S-TransUnet) model that integrates Atrous Spatial Pyramid Pooling (ASPP), an Information Bottleneck (IB), and a Convolutional Block Attention Module (CBAM) to optimize multi-scale feature extraction and achieve accurate highlight detection. In the removal phase, we refine advanced median filtering to smooth reflective areas and integrate color information for natural restoration. Experimental results show that the proposed SDMFFN outperforms other methods. Our method improves visual clarity and diagnostic precision, ultimately enhancing surgical outcomes and reducing the risk of misdiagnosis by delivering high-quality, reflection-free endoscopic images. The robust performance of SDMFFN suggests its adaptability to other medical imaging modalities, paving the way for broader clinical and research applications in robotic surgery, diagnostic endoscopy, and telemedicine. To promote further progress in this research area, we will make the code publicly available at: https://github.com/jize123457/SDMFFN.
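The removal stage described above smooths detected reflective regions with median filtering and fuses the result back into the image. A minimal sketch of that general idea on a toy grayscale frame; the threshold-based detection stand-in and the plain mask fusion below are simplifications, since the paper uses S-TransUnet for detection and a color-aware fusion:

```python
import numpy as np
from scipy.ndimage import median_filter

# Toy grayscale endoscopic frame with a bright specular blob.
rng = np.random.default_rng(2)
image = rng.normal(0.4, 0.05, (64, 64)).clip(0, 1)
image[28:36, 28:36] = 1.0                       # simulated specular highlight

# Stage 1 stand-in: threshold-based detection (the paper uses S-TransUnet instead).
mask = image > 0.9

# Stage 2: replace highlighted pixels with a broad median-filtered estimate.
smoothed = median_filter(image, size=15)
restored = np.where(mask, smoothed, image)
print("max inside highlight before/after:", image[mask].max(), restored[mask].max())
```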

Wu X, Zhang X, Xiao Z, Hu L, Higashita R, Liu J

PubMed | Dec 8, 2025
Efficient convolutional neural network (CNN) architecture design has attracted growing research interest. However, existing designs typically apply a single receptive field (RF), small asymmetric RFs, or pyramid RFs to learn different feature representations, and still encounter two significant challenges in medical image classification tasks: i) they are limited in efficiently capturing diverse lesion characteristics (e.g., tiny, coordination, small, and salient), which play unique roles in the classification results, especially in imbalanced medical image classification; ii) the predictions generated by such CNNs are often unfair or biased, posing a high risk when deploying them in real-world medical diagnosis settings. To tackle these issues, we develop a new concept, Expert-Like Reparameterization of Heterogeneous Pyramid Receptive Fields (ERoHPRF), to simultaneously boost medical image classification performance and fairness. This concept mimics the multi-expert consultation mode by applying a well-designed heterogeneous pyramid RF bag to effectively capture lesion characteristics of varying significance via convolution operations with multiple heterogeneous kernel sizes. Additionally, ERoHPRF introduces an expert-like structural reparameterization technique that merges its parameters with a two-stage strategy, ensuring computation cost and inference speed competitive with a single RF. To demonstrate the effectiveness and generalization ability of ERoHPRF, we incorporate it into mainstream efficient CNN architectures. Extensive experiments show that ERoHPRF maintains a better trade-off than state-of-the-art methods in terms of medical image classification, fairness, and computation overhead. The code of this paper is available at https://github.com/XiaoLing12138/Expert-Like-Reparameterization-of-Heterogeneous-Pyramid-Receptive-Fields.
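The structural reparameterization mentioned above merges parallel convolution branches with different kernel sizes into a single kernel at inference time, so the heterogeneous pyramid RFs cost no more than one convolution. A minimal 2D PyTorch sketch of that general merging idea (RepVGG-style zero-padding of the smaller kernel; an illustration of the technique, not the paper's exact two-stage procedure):

```python
import torch
import torch.nn.functional as F

# Two parallel branches with heterogeneous receptive fields (5x5 and 3x3), one input/output channel.
k5 = torch.randn(1, 1, 5, 5)
k3 = torch.randn(1, 1, 3, 3)
x = torch.randn(1, 1, 16, 16)

# Training-time output: sum of the two branches (padding keeps spatial size).
y_multi = F.conv2d(x, k5, padding=2) + F.conv2d(x, k3, padding=1)

# Reparameterization: zero-pad the 3x3 kernel to 5x5 and add, giving one equivalent kernel.
k3_padded = F.pad(k3, (1, 1, 1, 1))          # pad last two dims by 1 on each side
k_merged = k5 + k3_padded
y_single = F.conv2d(x, k_merged, padding=2)

print("max abs difference:", (y_multi - y_single).abs().max().item())  # ~1e-6, numerically identical
```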