
Feasibility study of "double-low" scanning protocol combined with artificial intelligence iterative reconstruction algorithm for abdominal computed tomography enhancement in patients with obesity.

Ji MT, Wang RR, Wang Q, Li HS, Zhao YX

PubMed · Jul 9 2025
To evaluate the efficacy of the "double-low" scanning protocol combined with the artificial intelligence iterative reconstruction (AIIR) algorithm for abdominal computed tomography (CT) enhancement in patients with obesity, and to identify the optimal AIIR algorithm level. Patients with a body mass index ≥ 30.00 kg/m² who underwent abdominal CT enhancement were randomly assigned to group A or group B. Group A underwent the conventional protocol with the Karl 3D iterative reconstruction algorithm at levels 3-5; group B underwent the "double-low" protocol with the AIIR algorithm at levels 1-5. Radiation dose, total iodine intake, and subjective and objective image quality were recorded, and the optimal reconstruction levels for arterial-phase and portal-venous-phase images were identified. Overall, 150 patients with obesity were enrolled, with 75 in each group. Karl 3D level 5 was the optimal algorithm level for group A, and AIIR level 4 was the optimal level for group B. AIIR level 4 images in group B showed significantly superior subjective and objective image quality compared with Karl 3D level 5 images in group A (P < 0.001). Group B also showed reductions in mean CT dose index, dose-length product, size-specific dose estimate based on water-equivalent diameter, and total iodine intake compared with group A (P < 0.001). The "double-low" scanning protocol combined with the AIIR algorithm significantly reduces radiation dose and iodine intake during abdominal CT enhancement in patients with obesity, and AIIR level 4 is the optimal reconstruction level for arterial-phase and portal-venous-phase imaging in this population.
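
As a back-of-envelope illustration of the dose metrics this abstract reports, the sketch below computes dose-length product and a size-specific dose estimate from CTDIvol and water-equivalent diameter. The exponential conversion-factor fit follows the AAPM Report 204 form for the 32 cm body phantom; the coefficients and patient values are illustrative assumptions, not figures from the study.

```python
import math

def dlp(ctdi_vol_mgy: float, scan_length_cm: float) -> float:
    """Dose-length product (mGy*cm) = CTDIvol x scan length."""
    return ctdi_vol_mgy * scan_length_cm

def ssde(ctdi_vol_mgy: float, dw_cm: float) -> float:
    """Size-specific dose estimate from water-equivalent diameter Dw.

    Uses the AAPM Report 204-style exponential fit for the 32 cm body
    phantom, f(Dw) = a * exp(-b * Dw); verify the coefficients against
    the report before any real-world use.
    """
    f_size = 3.704369 * math.exp(-0.03671937 * dw_cm)
    return f_size * ctdi_vol_mgy

# Hypothetical obese patient (Dw ~ 38 cm) scanned at CTDIvol 12 mGy over 45 cm
print(f"DLP  = {dlp(12.0, 45.0):.0f} mGy*cm")
print(f"SSDE = {ssde(12.0, 38.0):.1f} mGy")
```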

The correlation of liquid biopsy genomic data to radiomics in colon, pancreatic, lung and prostatic cancer patients.

Italiano A, Gautier O, Dupont J, Assi T, Dawi L, Lawrance L, Bone A, Jardali G, Choucair A, Ammari S, Bayle A, Rouleau E, Cournede PH, Borget I, Besse B, Barlesi F, Massard C, Lassau N

PubMed · Jul 8 2025
With the advances in artificial intelligence (AI) and precision medicine, radiomics has emerged as a promising tool in the field of oncology. Radiogenomics integrates radiomics with genomic data, potentially offering a non-invasive method for identifying biomarkers relevant to cancer therapy. Liquid biopsy (LB) has further revolutionized cancer diagnostics by detecting circulating tumor DNA (ctDNA), enabling real-time molecular profiling. This study explores the integration of radiomics and LB to predict genomic alterations in solid tumors, including lung, colon, pancreatic, and prostate cancers. A retrospective study was conducted on 418 patients from the STING trial (NCT04932525), all of whom underwent both LB and CT imaging. Predictive models were developed using an XGBoost logistic classifier, with statistical analysis performed to compare tumor volumes, lesion counts, and affected organs across molecular subtypes. Performance was evaluated using area under the curve (AUC) values and cross-validation techniques. Radiomic models demonstrated moderate-to-good performance in predicting genomic alterations. KRAS mutations were best identified in pancreatic cancer (AUC = 0.97), while moderate discrimination was noted in lung (AUC = 0.66) and colon cancer (AUC = 0.64). EGFR mutations in lung cancer were detected with an AUC of 0.74, while BRAF mutations showed good discriminatory ability in both lung (AUC = 0.79) and colon cancer (AUC = 0.76). In the radiomics predictive model, AR mutations in prostate cancer showed limited discrimination (AUC = 0.63). This study highlights the feasibility of integrating radiomics and LB for non-invasive genomic profiling in solid tumors, demonstrating significant potential in patient stratification and personalized oncology care. While promising, further prospective validation is required to enhance the generalizability of these models.
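
For readers unfamiliar with the modeling setup, here is a minimal, hypothetical sketch of the kind of pipeline the abstract describes: radiomic features fed to an XGBoost logistic classifier, scored by cross-validated AUC. The feature matrix and labels are random placeholders, not the STING trial data.

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(418, 50))     # 418 patients x 50 radiomic features (synthetic)
y = rng.integers(0, 2, size=418)   # 1 = alteration detected on liquid biopsy (synthetic)

model = XGBClassifier(
    objective="binary:logistic",   # logistic classifier, as in the abstract
    n_estimators=200,
    max_depth=3,
    learning_rate=0.05,
    eval_metric="auc",
)

# Cross-validated AUC, mirroring the paper's evaluation strategy
aucs = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```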

LangMamba: A Language-driven Mamba Framework for Low-dose CT Denoising with Vision-language Models

Zhihao Chen, Tao Chen, Chenhui Wang, Qi Gao, Huidong Xie, Chuang Niu, Ge Wang, Hongming Shan

arXiv preprint · Jul 8 2025
Low-dose computed tomography (LDCT) reduces radiation exposure but often degrades image quality, potentially compromising diagnostic accuracy. Existing deep learning-based denoising methods focus primarily on pixel-level mappings, overlooking the potential benefits of high-level semantic guidance. Recent advances in vision-language models (VLMs) suggest that language can serve as a powerful tool for capturing structured semantic information, offering new opportunities to improve LDCT reconstruction. In this paper, we introduce LangMamba, a language-driven Mamba framework for LDCT denoising that leverages VLM-derived representations to enhance supervision from normal-dose CT (NDCT). LangMamba follows a two-stage learning strategy. First, we pre-train a Language-guided AutoEncoder (LangAE) that leverages frozen VLMs to map NDCT images into a semantic space enriched with anatomical information. Second, we synergize LangAE with two key components to guide LDCT denoising: a Semantic-Enhanced Efficient Denoiser (SEED), which enhances NDCT-relevant local semantics while capturing global features with an efficient Mamba mechanism, and a Language-engaged Dual-space Alignment (LangDA) loss, which ensures that denoised images align with NDCT in both perceptual and semantic spaces. Extensive experiments on two public datasets demonstrate that LangMamba outperforms conventional state-of-the-art methods, significantly improving detail preservation and visual fidelity. Remarkably, LangAE exhibits strong generalizability to unseen datasets, thereby reducing training costs. Furthermore, the LangDA loss improves explainability by integrating language-guided insights into image reconstruction and can be used in a plug-and-play fashion. Our findings shed new light on the potential of language as a supervisory signal to advance LDCT denoising. The code is publicly available at https://github.com/hao1635/LangMamba.
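
To make the dual-space alignment idea concrete, here is a minimal PyTorch sketch of a LangDA-style loss under the assumption that two frozen encoders provide the perceptual and semantic spaces (the paper obtains the semantic space from its VLM-based LangAE; the encoders, weights, and function names here are placeholders, not the authors' implementation).

```python
import torch
import torch.nn.functional as F

def langda_style_loss(denoised, ndct, perceptual_net, semantic_encoder,
                      w_perc=1.0, w_sem=1.0):
    """Pull the denoised image toward NDCT in two feature spaces at once."""
    # Perceptual alignment: match features from a frozen perceptual network
    loss_perc = F.l1_loss(perceptual_net(denoised), perceptual_net(ndct))
    # Semantic alignment: match embeddings from a frozen semantic encoder
    loss_sem = F.l1_loss(semantic_encoder(denoised), semantic_encoder(ndct))
    return w_perc * loss_perc + w_sem * loss_sem

# Toy usage with stand-in encoders (simple convs in place of frozen VLM parts)
if __name__ == "__main__":
    perc = torch.nn.Conv2d(1, 8, 3, padding=1)
    sem = torch.nn.Conv2d(1, 8, 3, padding=1)
    x_denoised = torch.rand(2, 1, 64, 64)
    x_ndct = torch.rand(2, 1, 64, 64)
    print(langda_style_loss(x_denoised, x_ndct, perc, sem).item())
```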

Vision Transformers-Based Deep Feature Generation Framework for Hydatid Cyst Classification in Computed Tomography Images.

Sagik M, Gumus A

PubMed · Jul 8 2025
Hydatid cysts, caused by Echinococcus granulosus, form progressively enlarging fluid-filled cysts in organs such as the liver and lungs, posing significant public health risks through severe complications or death. This study presents a novel deep feature generation framework utilizing vision transformer models (ViT-DFG) to enhance the classification accuracy of hydatid cyst types. The proposed framework consists of four phases: image preprocessing, feature extraction using vision transformer models, feature selection through iterative neighborhood component analysis, and classification, in which the performance of the ViT-DFG model was evaluated with classifiers such as k-nearest neighbor (kNN) and multi-layer perceptron (MLP), each assessed independently. The dataset, comprising five cyst types, was analyzed for both five-class and three-class classification, the latter grouping the cyst types into active, transition, and inactive categories. Experimental results showed that the proposed ViT-DFG method achieves higher accuracy than existing methods. Specifically, the ViT-DFG framework attained an overall classification accuracy of 98.10% for the three-class and 95.12% for the five-class classification using 5-fold cross-validation. One-way analysis of variance (ANOVA), conducted to evaluate differences between models, confirmed significant differences between the proposed framework and individual vision transformer models (p < 0.05). These results highlight the effectiveness of combining multiple vision transformer architectures with advanced feature selection techniques to improve classification performance. The findings underscore the ViT-DFG framework's potential to advance medical image analysis, particularly in hydatid cyst classification, while offering clinical promise through automated diagnostics and improved decision-making.
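
A schematic version of this pipeline, assuming a pretrained ViT as the feature extractor: deep features are pooled from the transformer, a feature-selection step reduces them, and a kNN classifier is cross-validated. `SelectKBest(f_classif)` stands in for the paper's iterative neighborhood component analysis (INCA), and the images and labels are random placeholders.

```python
import timm
import torch
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Pretrained ViT as a frozen feature extractor (num_classes=0 -> pooled features)
vit = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
vit.eval()

images = torch.rand(40, 3, 224, 224)   # placeholder CT patches
labels = torch.randint(0, 3, (40,))    # active / transition / inactive (synthetic)

with torch.no_grad():
    feats = vit(images).numpy()        # (40, 768) deep features

clf = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=64),      # stand-in for INCA feature selection
    KNeighborsClassifier(n_neighbors=5),
)
print(cross_val_score(clf, feats, labels.numpy(), cv=5).mean())
```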

A Meta-Analysis of the Diagnosis of Condylar and Mandibular Fractures Based on 3-dimensional Imaging and Artificial Intelligence.

Wang F, Jia X, Meiling Z, Oscandar F, Ghani HA, Omar M, Li S, Sha L, Zhen J, Yuan Y, Zhao B, Abdullah JY

PubMed · Jul 8 2025
This article reviews the literature on the use of 3-dimensional (3D) imaging and artificial intelligence (AI)-assisted methods for rapid and accurate classification and diagnosis of condylar fractures, and presents a meta-analysis of mandibular fracture diagnosis. Mandibular condyle fractures are a common fracture type in maxillofacial surgery, and their accurate classification and diagnosis are critical to developing an effective treatment plan. With the rapid development of 3D imaging and AI, traditional x-ray diagnosis is gradually being replaced by more accurate techniques such as 3D computed tomography (CT). These emerging technologies provide more detailed anatomic information and significantly improve the accuracy and efficiency of condylar fracture diagnosis, especially in the evaluation and surgical planning of complex fractures. The application of AI in medical imaging is further analyzed, with particular attention to successful cases of fracture detection and classification using deep learning models. Although AI has demonstrated great potential in condylar fracture diagnosis, it still faces challenges such as data quality, model interpretability, and clinical validation. This article evaluates the accuracy and practicality of AI in diagnosing mandibular fractures through a systematic review and meta-analysis of the existing literature. The results show that AI-assisted diagnosis achieves high prediction accuracy in detecting condylar fractures and significantly improves diagnostic efficiency. However, more multicenter studies are needed to validate the application of AI in different clinical settings and to promote its widespread adoption in maxillofacial surgery.

Progress in fully automated abdominal CT interpretation-an update over the past decade.

Batheja V, Summers R

PubMed · Jul 8 2025
This article reviews advancements in fully automated abdominal CT interpretation over the past decade, with a focus on automated image analysis techniques such as quantitative analysis, computer-aided detection, and disease classification. For each abdominal organ, we review segmentation techniques, assess clinical applications and performance, and explore methods for detecting and classifying associated pathologies. We also highlight cutting-edge AI developments, including foundation models, large language models, and multimodal image analysis. While challenges remain in integrating AI into radiology practice, recent progress underscores its growing potential to streamline workflows, reduce radiologist burnout, and enhance patient care.

Automated instance segmentation and registration of spinal vertebrae from CT-Scans with an improved 3D U-net neural network and corner point registration.

Hill J, Khokher MR, Nguyen C, Adcock M, Li R, Anderson S, Morrell T, Diprose T, Salvado O, Wang D, Tay GK

PubMed · Jul 8 2025
This paper presents a rapid and robust approach for 3D volumetric segmentation, labelling, and registration of human spinal vertebrae from CT scans using an optimised and improved 3D U-Net neural network architecture. The network incorporates residual and dense interconnections, and different network setups were extensively evaluated by optimising components such as activation functions, optimisers, and pooling operations. In addition, the architecture is optimised for varying numbers of convolution layers per block and U-Net levels, with fixed and cascading numbers of filters. For 3D virtual reality visualisation, the segmentation output of the improved 3D U-Net is registered with the original scans through a corner point registration process. The registration takes into account the spatial coordinates of each segmented vertebra as a 3D volume and eight virtual fiducial markers to ensure alignment in all rotational planes. Trained on the VerSe'20 dataset, the proposed pipeline achieves a Dice similarity coefficient of 92.38% for vertebra instance segmentation and a Hausdorff distance of 5.26 mm for vertebra localisation on the VerSe'20 public test dataset, outperforming many existing methods that participated in the VerSe'20 challenge. Integrated with Singular Health's MedVR software for virtual reality visualisation, the proposed solution has been deployed on standard edge-computing hardware in medical institutions. Depending on the scan size, the deployed solution takes between 90 and 210 s to label and segment the vertebrae, including the cervical vertebrae. It is hoped that accelerating the segmentation and registration process will facilitate easier preparation of future training datasets and benefit pre-surgical visualisation and planning.
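
One way to read "corner point registration with eight virtual fiducial markers" is that each segmented vertebra contributes the eight corners of its bounding box as fiducials, from which a rigid transform can be estimated by the standard Kabsch/SVD method. The sketch below is an illustrative reconstruction under that assumption, not the authors' code.

```python
import numpy as np

def bbox_corners(mask: np.ndarray) -> np.ndarray:
    """Eight corner points (voxel coordinates) of a binary mask's bounding box."""
    idx = np.argwhere(mask)
    lo, hi = idx.min(axis=0), idx.max(axis=0)
    return np.array([[x, y, z] for x in (lo[0], hi[0])
                               for y in (lo[1], hi[1])
                               for z in (lo[2], hi[2])], dtype=float)

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rotation R and translation t mapping src points onto dst."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - sc).T @ (dst - dc))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

# Toy check: recover a known rotation and shift of a vertebra-like mask's corners
mask = np.zeros((32, 32, 32), bool); mask[8:20, 10:22, 6:18] = True
src = bbox_corners(mask)
theta = np.deg2rad(15)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([2.0, -1.0, 0.5])
R, t = rigid_fit(src, dst)
print(np.allclose(R, R_true), np.round(t, 2))
```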

Noise-inspired diffusion model for generalizable low-dose CT reconstruction.

Gao Q, Chen Z, Zeng D, Zhang J, Ma J, Shan H

PubMed · Jul 8 2025
The generalization of deep learning-based low-dose computed tomography (CT) reconstruction models to doses unseen in the training data is important and remains challenging. Previous efforts rely heavily on paired data to improve generalization performance and robustness, either by collecting diverse CT data for re-training or a few test data for fine-tuning. Recently, diffusion models have shown promising and generalizable performance in low-dose CT (LDCT) reconstruction; however, they may produce unrealistic structures because the CT image noise deviates from a Gaussian distribution and because guidance from noisy LDCT images provides imprecise prior information. In this paper, we propose a noise-inspired diffusion model for generalizable LDCT reconstruction, termed NEED, which tailors diffusion models to the noise characteristics of each domain. First, we propose a novel shifted Poisson diffusion model to denoise projection data, aligning the diffusion process with the noise model in pre-log LDCT projections. Second, we devise a doubly guided diffusion model to refine reconstructed images, leveraging LDCT images and initial reconstructions to more accurately locate prior information and enhance reconstruction fidelity. By cascading these two diffusion models for dual-domain reconstruction, our NEED requires only normal-dose data for training and can be effectively extended to various unseen dose levels during testing via a time step matching strategy. Extensive qualitative, quantitative, and segmentation-based evaluations on two datasets demonstrate that NEED consistently outperforms state-of-the-art methods in reconstruction and generalization performance. Source code is available at https://github.com/qgao21/NEED.
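
For context on the pre-log noise model that motivates the shifted Poisson formulation: detected counts follow Poisson statistics plus Gaussian electronic noise, and adding the electronic variance to the measurement gives the classic shifted-Poisson approximation (an approximately Poisson variable with matched mean and variance). The sketch below simulates this; all values are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

I0 = 1e5          # incident photon count (dose level; lower = noisier)
sigma_e = 10.0    # std of Gaussian electronic noise
p_true = rng.uniform(0.0, 4.0, size=1000)   # noise-free line integrals

# Pre-log measurement: Poisson counts plus electronic noise
counts = rng.poisson(I0 * np.exp(-p_true)) + rng.normal(0, sigma_e, p_true.shape)

# Shifted-Poisson surrogate: y + sigma_e^2 is approximately Poisson,
# which is the statistic a projection-domain diffusion model can target
shifted = np.clip(counts + sigma_e**2, 1.0, None)
p_noisy = -np.log(shifted / (I0 + sigma_e**2))   # post-log projection

print(f"mean abs error in line integrals: {np.abs(p_noisy - p_true).mean():.4f}")
```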

A Deep Learning Model for Comprehensive Automated Bone Lesion Detection and Classification on Staging Computed Tomography Scans.

Simon BD, Harmon SA, Yang D, Belue MJ, Xu Z, Tetreault J, Pinto PA, Wood BJ, Citrin DE, Madan RA, Xu D, Choyke PL, Gulley JL, Turkbey B

PubMed · Jul 8 2025
Bone is a common site of metastasis for a variety of cancers; bone lesions are important for cancer staging yet challenging and time-consuming to review. Here, we developed a deep learning approach for the detection and classification of bone lesions on staging CTs. This study developed an nnUNet model using CTs from 402 patients, including prostate cancer patients with benign or malignant osteoblastic (blastic) bone lesions and patients with benign or malignant osteolytic (lytic) bone lesions from various primary cancers. An expert radiologist contoured ground truth lesions, and the model was evaluated for detection on a lesion level. For classification performance, accuracy, sensitivity, specificity, and other metrics were calculated. The held-out test set consisted of 69 patients (32 with bone metastases), and the AUC of AI-predicted burden of disease was calculated on a patient level. In the independent test set, 70% of ground truth lesions were detected (67% of malignant lesions and 72% of benign lesions). The model achieved an accuracy of 85% in classifying lesions as malignant or benign (91% sensitivity and 81% specificity). Although the AI identified false positives in several patients with benign disease, the patient-level AUC was 0.82 using the predicted proportion of disease burden. Our lesion detection and classification AI model performs accurately and has the potential to correct physician errors. Further studies should investigate whether the model can improve physician review in terms of detection rate, classification accuracy, and review time.
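
A small sketch of the two levels of evaluation the abstract reports: lesion-level sensitivity/specificity from a confusion matrix, and a patient-level ROC AUC computed from a predicted disease-burden score. All numbers below are synthetic stand-ins.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(1)

# Lesion-level classification (1 = malignant, 0 = benign)
y_true = rng.integers(0, 2, size=200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)  # ~85% accurate
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")

# Patient-level AUC from a burden score (e.g., proportion of volume flagged)
has_mets = rng.integers(0, 2, size=69)            # 1 = bone metastases present
burden = has_mets * rng.random(69) + 0.2 * rng.random(69)
print(f"patient-level AUC = {roc_auc_score(has_mets, burden):.2f}")
```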

HGNet: High-Order Spatial Awareness Hypergraph and Multi-Scale Context Attention Network for Colorectal Polyp Detection

Xiaofang Liu, Lingling Sun, Xuqing Zhang, Yuannong Ye, Bin Zhao

arXiv preprint · Jul 7 2025
Colorectal cancer (CRC) is closely linked to the malignant transformation of colorectal polyps, making early detection essential. However, current models struggle with detecting small lesions, accurately localizing boundaries, and providing interpretable decisions. To address these issues, we propose HGNet, which integrates a high-order spatial awareness hypergraph and multi-scale context attention. Key innovations include: (1) an Efficient Multi-Scale Context Attention (EMCA) module to enhance lesion feature representation and boundary modeling; (2) the deployment of a spatial hypergraph convolution module before the detection head to capture higher-order spatial relationships between nodes; (3) the application of transfer learning to address the scarcity of medical image data; and (4) Eigen Class Activation Map (Eigen-CAM) for decision visualization. Experimental results show that HGNet achieves 94% accuracy, 90.6% recall, and 90% mAP@0.5, significantly improving small lesion differentiation and clinical interpretability. The source code will be made publicly available upon publication of this paper.
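
For readers unfamiliar with hypergraph convolution, the sketch below implements the standard spectral HGNN propagation rule, X' = Dv^(-1/2) H W De^(-1) H^T Dv^(-1/2) X Θ, where H is the node-hyperedge incidence matrix. This shows the generic operation a layer like HGNet's would build on, not the authors' exact module.

```python
import torch
import torch.nn as nn

class HypergraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X: torch.Tensor, H: torch.Tensor) -> torch.Tensor:
        # X: (num_nodes, in_dim); H: (num_nodes, num_edges) incidence matrix
        W = torch.ones(H.shape[1])                 # unit hyperedge weights
        Dv = (H * W).sum(dim=1).clamp(min=1e-6)    # node degrees
        De = H.sum(dim=0).clamp(min=1e-6)          # hyperedge degrees
        Dv_inv_sqrt = Dv.pow(-0.5)
        msg = Dv_inv_sqrt[:, None] * H             # Dv^-1/2 H
        msg = msg * (W / De)                       # ... W De^-1
        agg = msg @ (H.t() @ (Dv_inv_sqrt[:, None] * self.theta(X)))
        return torch.relu(agg)

# Toy usage: 6 spatial nodes grouped into 3 hyperedges
H = torch.tensor([[1, 0, 0], [1, 1, 0], [0, 1, 0],
                  [0, 1, 1], [0, 0, 1], [1, 0, 1]], dtype=torch.float)
X = torch.rand(6, 16)
print(HypergraphConv(16, 32)(X, H).shape)   # -> torch.Size([6, 32])
```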