Page 212 of 362 (3611 results)

Integrating CT radiomics and clinical features using machine learning to predict post-COVID pulmonary fibrosis.

Zhao Q, Li Y, Zhao C, Dong R, Tian J, Zhang Z, Huang L, Huang J, Yan J, Yang Z, Ruan J, Wang P, Yu L, Qu J, Zhou M

pubmed logopapers · Jul 2 2025
The lack of reliable biomarkers for the early detection and risk stratification of post-COVID-19 pulmonary fibrosis (PCPF) underscores the urgent need for advanced predictive tools. This study aimed to develop a machine learning-based predictive model integrating quantitative CT (qCT) radiomics and clinical features to assess the risk of lung fibrosis in COVID-19 patients. A total of 204 patients with confirmed COVID-19 pneumonia were included in the study. Of these, 93 patients were assigned to the development cohort (74 for training and 19 for internal validation), while 111 patients from three independent hospitals constituted the external validation cohort. Chest CT images were analyzed using qCT software. Clinical data and laboratory parameters were obtained from electronic health records. Least absolute shrinkage and selection operator (LASSO) regression with 5-fold cross-validation was used to select the most predictive features. Twelve machine learning algorithms were independently trained. Their performance was evaluated using receiver operating characteristic (ROC) curves, area under the curve (AUC) values, sensitivity, and specificity. Seventy-eight features were extracted and reduced to ten features for model development. These included two qCT radiomics signatures: (1) whole lung_reticulation (%) interstitial lung disease (ILD) texture analysis, (2) interstitial lung abnormality (ILA)_Num of lung zones ≥ 5%_whole lung_ILA. Among the 12 machine learning algorithms evaluated, the support vector machine (SVM) model demonstrated the best predictive performance, with AUCs of 0.836 (95% CI: 0.830-0.842) in the training cohort, 0.796 (95% CI: 0.777-0.816) in the internal validation cohort, and 0.797 (95% CI: 0.691-0.873) in the external validation cohort.
The integration of CT radiomics with clinical and laboratory variables using machine learning provides a robust tool for predicting pulmonary fibrosis progression in COVID-19 patients, facilitating early risk assessment and intervention.
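The modelling recipe described above (LASSO feature selection under 5-fold cross-validation feeding an SVM, judged by ROC AUC) can be sketched with scikit-learn. The data, dimensions, and hyperparameters below are illustrative stand-ins, not the study's actual settings:

```python
# Sketch: LASSO (5-fold CV) feature selection followed by an SVM classifier,
# evaluated by ROC AUC. Synthetic data stands in for the radiomics features.
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 78))            # 78 candidate features, as in the study
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=120) > 0).astype(int)

model = Pipeline([
    ("scale", StandardScaler()),
    ("lasso", SelectFromModel(LassoCV(cv=5, random_state=0))),  # shrink 78 -> few
    ("svm", SVC(probability=True, random_state=0)),
])

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
n_kept = int(model.named_steps["lasso"].get_support().sum())
```

With an L1-penalized selector, `SelectFromModel` keeps only features with non-negligible coefficients, mirroring the paper's 78-to-10 reduction.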

Clinical value of the 70-kVp ultra-low-dose CT pulmonary angiography with deep learning image reconstruction.

Zhang Y, Wang L, Yuan D, Qi K, Zhang M, Zhang W, Gao J, Liu J

pubmed logopapers · Jul 2 2025
This study aims to assess the feasibility of "double-low" (low radiation dose and low contrast media dose) CT pulmonary angiography (CTPA) based on deep-learning image reconstruction (DLIR) algorithms. One hundred consecutive patients (41 females; average age 60.9 years, range 18-90) were prospectively scanned on multi-detector CT systems. Fifty patients in the conventional-dose (CD) group underwent CTPA with a 100-kVp protocol using a traditional iterative reconstruction algorithm, and 50 patients in the low-dose (LD) group underwent CTPA with a 70-kVp DLIR protocol. Radiation and contrast agent doses were recorded and compared between groups. Objective parameters were measured and compared. Two radiologists independently evaluated images for overall image quality, artifacts, and image contrast on a 5-point scale. The furthest visible branches were compared between groups. Compared with the CD group, the LD group reduced the dose-length product by 80.3% (p < 0.01) and the contrast media dose by 33.3%. CT values, SD values, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) showed no statistically significant differences (all p > 0.05) between the LD and CD groups. Overall image quality scores were comparable between the LD and CD groups (p > 0.05), with good inter-reader agreement (k = 0.75). More peripheral pulmonary vessels could be assessed in the LD group than in the CD group. 70 kVp combined with DLIR reconstruction for CTPA can further reduce radiation and contrast agent doses while maintaining image quality and improving the visibility of distal pulmonary artery branches.
Question Elevated radiation exposure and substantial doses of contrast media during CT pulmonary angiography (CTPA) increase patient risks.
Findings The "double-low" CTPA protocol can diminish radiation doses by 80.3% and contrast doses by one-third while maintaining image quality.
Clinical relevance With deep learning algorithms, CTPA images maintained excellent quality despite reduced radiation and contrast dosages, helping to reduce radiation exposure and kidney burden on patients. The "double-low" CTPA protocol, complemented by deep learning image reconstruction, prioritizes patient safety.
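The objective parameters compared here (SNR and CNR) are simple functions of ROI statistics; a minimal sketch, with made-up Hounsfield-unit values rather than the paper's measurements:

```python
# Objective image-quality metrics for CTPA comparisons, computed from ROI
# statistics. The numbers below are illustrative, not the study's data.
def snr(roi_mean_hu: float, noise_sd: float) -> float:
    """Signal-to-noise ratio of a vessel ROI."""
    return roi_mean_hu / noise_sd

def cnr(vessel_mean_hu: float, background_mean_hu: float, noise_sd: float) -> float:
    """Contrast-to-noise ratio between vessel and background (e.g. muscle)."""
    return (vessel_mean_hu - background_mean_hu) / noise_sd

pa_mean, muscle_mean, sd = 450.0, 55.0, 22.0   # hypothetical pulmonary-artery ROI
print(round(snr(pa_mean, sd), 1))              # 20.5
print(round(cnr(pa_mean, muscle_mean, sd), 1)) # 18.0
```

Comparable SNR/CNR between protocols, as reported above, is what justifies calling the dose reduction "free" in image-quality terms.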

A deep learning model for early diagnosis of Alzheimer's disease combining 3D CNN and Video Swin Transformer.

Zhou J, Wei Y, Li X, Zhou W, Tao R, Hua Y, Liu H

pubmed logopapers · Jul 2 2025
Alzheimer's disease (AD) is a neurodegenerative disorder predominantly observed in the geriatric population. Early diagnosis of AD greatly benefits patients in terms of both prevention and treatment. Our team therefore proposed a novel deep learning model named 3D-CNN-VSwinFormer. The model consists of two components: the first is a 3D CNN equipped with a 3D Convolutional Block Attention Module (3D CBAM), and the second is a fine-tuned Video Swin Transformer. Our investigation extracts features from subject-level 3D magnetic resonance imaging (MRI) data, retaining only a single 3D MRI image per participant. This approach circumvents data leakage and addresses the inability of 2D slices to capture global spatial information. We used the ADNI dataset to validate the proposed model. In differentiating AD patients from cognitively normal (CN) individuals, it achieved accuracy and AUC values of 92.92% and 0.9660, respectively. Compared with other studies on AD and CN recognition, our model yielded superior results, enhancing the efficiency of AD diagnosis.
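The channel-attention half of a CBAM-style block adapted to 3D feature maps can be sketched in a few lines of numpy; the reduction ratio, weights, and shapes below are illustrative, not the paper's configuration:

```python
# Minimal sketch of 3D CBAM channel attention: global average- and max-pool
# over the spatial volume, a shared two-layer MLP, and sigmoid rescaling.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention_3d(x, w1, w2):
    """x: feature map of shape (C, D, H, W); w1, w2: shared MLP weights."""
    avg = x.mean(axis=(1, 2, 3))          # global average pool -> (C,)
    mx = x.max(axis=(1, 2, 3))            # global max pool -> (C,)
    att = sigmoid(w2 @ np.maximum(w1 @ avg, 0) + w2 @ np.maximum(w1 @ mx, 0))
    return x * att[:, None, None, None]   # rescale each channel in (0, 1)

rng = np.random.default_rng(0)
C, r = 8, 2                               # channels and reduction ratio (toy values)
x = rng.normal(size=(C, 4, 4, 4))
w1 = rng.normal(size=(C // r, C))
w2 = rng.normal(size=(C, C // r))
out = channel_attention_3d(x, w1, w2)
```

A full CBAM would follow this with a spatial-attention stage; only the channel stage is shown here.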

Retrieval-augmented generation elevates local LLM quality in radiology contrast media consultation.

Wada A, Tanaka Y, Nishizawa M, Yamamoto A, Akashi T, Hagiwara A, Hayakawa Y, Kikuta J, Shimoji K, Sano K, Kamagata K, Nakanishi A, Aoki S

pubmed logopapers · Jul 2 2025
Large language models (LLMs) demonstrate significant potential in healthcare applications, but clinical deployment is limited by privacy concerns and insufficient medical-domain training. This study investigated whether retrieval-augmented generation (RAG) can improve a locally deployable LLM for radiology contrast media consultation. In 100 synthetic iodinated contrast media consultations, we compared Llama 3.2-11B (baseline and RAG) with three cloud-based models: GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. A blinded radiologist ranked the five replies per case, and three LLM-based judges scored accuracy, safety, structure, tone, applicability, and latency. Under controlled conditions, RAG eliminated hallucinations (0% vs 8%; χ²₍Yates₎ = 6.38, p = 0.012) and improved mean rank by 1.3 (Z = -4.82, p < 0.001), though performance gaps with cloud models persisted. The RAG-enhanced model remained faster (2.6 s vs 4.9-7.3 s), and the LLM-based judges preferred it over GPT-4o mini, though the radiologist ranked GPT-4o mini higher. RAG thus provides meaningful improvements for local clinical LLMs while maintaining the privacy benefits of on-premise deployment.
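The RAG step can be illustrated with a toy retriever that picks the most similar guideline snippet before prompting the local model; the corpus and the bag-of-words scoring below are simplified stand-ins, not the study's retrieval stack:

```python
# Toy RAG retrieval: score candidate guideline snippets against the query
# by bag-of-words cosine similarity, then prepend the best one to the prompt.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

corpus = [  # hypothetical snippets, not the study's knowledge base
    "premedication protocol for prior iodinated contrast reaction",
    "metformin should be withheld when eGFR is below threshold",
    "contrast extravasation management and cold compress",
]
query = "patient with prior reaction to iodinated contrast premedication"

q = Counter(query.split())
best = max(corpus, key=lambda doc: cosine(q, Counter(doc.split())))
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer using the context."
```

Grounding the generation in retrieved text is what drove the hallucination rate to zero in the study's controlled comparison.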

Intelligent diagnosis model for diseases in chest X-ray images based on a convolutional neural network.

Yang S, Wu Y

pubmed logopapers · Jul 2 2025
To address misdiagnosis caused by feature coupling in multi-label medical image classification, this study introduces a chest X-ray pathology reasoning method that combines hierarchical attention convolutional networks with a multi-label decoupling loss function to enhance the precise identification of complex lesions. The method dynamically captures multi-scale lesion morphological features and integrates lung field partitioning with lesion localization through a dual-path attention mechanism, thereby improving clinical disease prediction accuracy. An adaptive dilated convolution module with 3 × 3 deformable kernels captures multi-scale lesion features, a channel-space dual-path attention mechanism enables precise feature selection for lung field partitioning and lesion localization, and cross-scale skip connections fuse shallow texture and deep semantic information, enhancing microlesion detection. A KL-divergence-constrained contrastive loss function decouples the 14 pathological feature representations via orthogonal regularization, effectively resolving multi-label coupling. Experiments on ChestX-ray14 show a weighted F1-score of 0.97, a Hamming loss of 0.086, and AUC values exceeding 0.94 for all pathologies, providing a reliable tool for multi-disease collaborative diagnosis.
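The orthogonal-regularization part of the decoupling idea can be sketched as a Frobenius-norm penalty on the Gram matrix of the 14 per-pathology feature directions; this sketch omits the KL-constrained contrastive term and uses made-up weights:

```python
# Orthogonality penalty ||W W^T - I||_F^2 on normalized label feature
# directions: it is zero when the 14 pathology representations are mutually
# orthogonal, and grows as they become correlated (coupled).
import numpy as np

def orthogonality_penalty(w: np.ndarray) -> float:
    """w: (n_labels, dim) matrix of label feature directions."""
    w = w / np.linalg.norm(w, axis=1, keepdims=True)   # L2-normalize rows
    gram = w @ w.T
    return float(np.sum((gram - np.eye(w.shape[0])) ** 2))

rng = np.random.default_rng(0)
w_random = rng.normal(size=(14, 64))              # 14 pathologies, 64-d features
print(orthogonality_penalty(np.eye(14, 64)))      # 0.0 for orthogonal rows
print(orthogonality_penalty(w_random) > 0.0)      # True: random rows overlap
```

Minimizing such a penalty alongside the classification loss pushes the 14 label representations apart, which is the stated mechanism for resolving multi-label coupling.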

Performance of two different artificial intelligence models in dental implant planning among four different implant planning software: a comparative study.

Roongruangsilp P, Narkbuakaew W, Khongkhunthian P

pubmed logopapers · Jul 2 2025
The integration of artificial intelligence (AI) in dental implant planning has emerged as a transformative approach to enhance diagnostic accuracy and efficiency. This study aimed to evaluate the performance of two object detection models, Faster R-CNN and YOLOv7, in analyzing cross-sectional and panoramic images derived from DICOM files processed by four distinct dental imaging software platforms. The dataset consisted of 332 implant position images derived from DICOM files of 184 CBCT scans. Three hundred images were processed using DentiPlan Pro 3.7 software (NECTEC, NSTDA, Thailand) to develop the Faster R-CNN and YOLOv7 models for dental implant planning. For model testing, 32 additional implant position images, which were not included in the training set, were processed using four different software programs: DentiPlan Pro 3.7, DentiPlan Pro Plus 5.0 (DTP; NECTEC, NSTDA, Thailand), Implastation (ProDigiDent USA, USA), and Romexis 6.0 (Planmeca, Finland). The performance of the models was evaluated using detection rate, accuracy, precision, recall, F1 score, and the Jaccard Index (JI). Faster R-CNN achieved superior accuracy across imaging modalities, while YOLOv7 demonstrated higher detection rates, albeit with lower precision. The impact of image rendering algorithms on model performance underscores the need for standardized preprocessing pipelines. Although Faster R-CNN demonstrated relatively higher performance metrics, statistical analysis revealed no significant differences between the models (p > 0.05). This study emphasizes the potential of AI-driven solutions in dental implant planning and advocates further research in this area. The absence of statistically significant differences between Faster R-CNN and YOLOv7 suggests that both models can be effectively utilized, depending on the specific requirements for accuracy or detection rate.
Furthermore, the variations in imaging rendering algorithms across different software platforms significantly influenced the model outcomes. AI models for DICOM analysis should rely on standardized image rendering to ensure consistent performance.
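The Jaccard Index (JI) reported for implant-position detection reduces to box intersection-over-union; a minimal reference implementation, with hypothetical boxes:

```python
# Jaccard Index (IoU) between a predicted and a ground-truth bounding box,
# each given as (x1, y1, x2, y2) in pixel coordinates.
def jaccard(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

pred, truth = (10, 10, 50, 50), (20, 20, 60, 60)   # illustrative boxes
print(round(jaccard(pred, truth), 3))  # 0.391
```

Detection rate, precision, and recall then follow from thresholding this IoU (commonly at 0.5) to decide whether a prediction counts as a true positive.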

PanTS: The Pancreatic Tumor Segmentation Dataset

Wenxuan Li, Xinze Zhou, Qi Chen, Tianyu Lin, Pedro R. A. S. Bassi, Szymon Plotka, Jaroslaw B. Cwikla, Xiaoxi Chen, Chen Ye, Zheren Zhu, Kai Ding, Heng Li, Kang Wang, Yang Yang, Yucheng Tang, Daguang Xu, Alan L. Yuille, Zongwei Zhou

arxiv logopreprint · Jul 2 2025
PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, the pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, diagnosis, contrast phase, in-plane spacing, and slice thickness. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation compared to those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 16x larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.

Predicting progression-free survival in sarcoma using MRI-based automatic segmentation models and radiomics nomograms: a preliminary multicenter study.

Zhu N, Niu F, Fan S, Meng X, Hu Y, Han J, Wang Z

pubmed logopapers · Jul 1 2025
Some sarcomas are highly malignant and are associated with high recurrence rates despite treatment. This multicenter study aimed to develop and validate a radiomics signature to estimate sarcoma progression-free survival (PFS). The study retrospectively enrolled 202 consecutive patients with pathologically diagnosed sarcoma who had pre-treatment axial fat-suppressed T2-weighted images (FS-T2WI), which were used to train the ROI-Net segmentation model. Among them, 120 patients who had both pre-treatment axial T1-weighted and FS-T2WI images were included in the radiomics analysis and randomly divided into a development group (n = 96) and a validation group (n = 24). In the development cohort, Least Absolute Shrinkage and Selection Operator (LASSO) Cox regression was used to select the radiomics features for PFS prediction. By combining significant clinical features with radiomics features, a nomogram was constructed using Cox regression. The proposed ROI-Net framework achieved a Dice coefficient of 0.820 (0.791-0.848). The radiomics signature, based on 21 features, could distinguish high-risk patients with poor PFS. Univariate Cox analysis revealed that peritumoral edema, metastases, and the radiomics score were associated with poor PFS and were included in the construction of the nomogram. The Radiomics-T1WI-Clinical model exhibited the best performance, with AUC values of 0.947, 0.907, and 0.924 at 300, 600, and 900 days, respectively. The proposed ROI-Net framework demonstrated high consistency between its segmentation results and expert annotations, and the radiomics features and combined nomogram have the potential to aid in predicting PFS for patients with sarcoma.
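The Dice coefficient used to score ROI-Net's segmentations has a one-line definition for binary masks; the masks below are toy arrays, not study data:

```python
# Dice coefficient between two binary segmentation masks:
# 2 * |A ∩ B| / (|A| + |B|), in [0, 1], with 1 meaning perfect overlap.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: boolean masks of the same shape."""
    inter = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * inter / denom if denom else 1.0

pred = np.zeros((8, 8), dtype=bool); pred[2:6, 2:6] = True    # 16-pixel square
truth = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True  # shifted square
print(round(dice(pred, truth), 3))  # overlap 3x3 = 9 -> 18/32 = 0.562
```

The same formula applies unchanged to 3D voxel masks, which is the setting of the 0.820 Dice reported above.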

MedGround-R1: Advancing Medical Image Grounding via Spatial-Semantic Rewarded Group Relative Policy Optimization

Huihui Xu, Yuanpeng Nie, Hualiang Wang, Ying Chen, Wei Li, Junzhi Ning, Lihao Liu, Hongqiu Wang, Lei Zhu, Jiyao Liu, Xiaomeng Li, Junjun He

arxiv logopreprint · Jul 1 2025
Medical Image Grounding (MIG), which involves localizing specific regions in medical images based on textual descriptions, requires models to not only perceive regions but also deduce spatial relationships of these regions. Existing Vision-Language Models (VLMs) for MIG often rely on Supervised Fine-Tuning (SFT) with large amounts of Chain-of-Thought (CoT) reasoning annotations, which are expensive and time-consuming to acquire. Recently, DeepSeek-R1 demonstrated that Large Language Models (LLMs) can acquire reasoning abilities through Group Relative Policy Optimization (GRPO) without requiring CoT annotations. In this paper, we adapt the GRPO reinforcement learning framework to VLMs for Medical Image Grounding. We propose Spatial-Semantic Rewarded Group Relative Policy Optimization to train the model without CoT reasoning annotations. Specifically, we introduce Spatial-Semantic Rewards, which combine a spatial accuracy reward and a semantic consistency reward to provide nuanced feedback for both spatially positive and negative completions. Additionally, we propose the Chain-of-Box template, which integrates visual information of referring bounding boxes into the <think> reasoning process, enabling the model to explicitly reason about spatial regions during intermediate steps. Experiments on three datasets (MS-CXR, ChestX-ray8, and M3D-RefSeg) demonstrate that our method achieves state-of-the-art performance in Medical Image Grounding. Ablation studies further validate the effectiveness of each component in our approach. Code, checkpoints, and datasets are available at https://github.com/bio-mlhui/MedGround-R1
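The group-relative step of GRPO normalizes each sampled completion's reward against its own group rather than against a learned value baseline; a sketch using IoU-style spatial rewards (the numbers are made up, and the paper's full reward also includes a semantic-consistency term):

```python
# Group-relative advantages as used in GRPO: rewards for a group of sampled
# completions are standardized within the group, so advantages are zero-mean.
def group_relative_advantages(rewards):
    n = len(rewards)
    mean = sum(rewards) / n
    var = sum((r - mean) ** 2 for r in rewards) / n
    std = var ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

# Hypothetical spatial-accuracy (IoU-style) rewards for 4 sampled groundings:
iou_rewards = [0.9, 0.4, 0.1, 0.6]
adv = group_relative_advantages(iou_rewards)
```

Completions whose predicted box overlaps the reference region more than the group average get positive advantage and are reinforced; the rest are suppressed.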

Unsupervised Cardiac Video Translation Via Motion Feature Guided Diffusion Model

Swakshar Deb, Nian Wu, Frederick H. Epstein, Miaomiao Zhang

arxiv logopreprint · Jul 1 2025
This paper presents a novel motion feature guided diffusion model for unpaired video-to-video translation (MFD-V2V), designed to synthesize dynamic, high-contrast cine cardiac magnetic resonance (CMR) from lower-contrast, artifact-prone displacement encoding with stimulated echoes (DENSE) CMR sequences. To achieve this, we first introduce a Latent Temporal Multi-Attention (LTMA) registration network that effectively learns more accurate and consistent cardiac motions from cine CMR image videos. A multi-level motion feature guided diffusion model, equipped with a specialized Spatio-Temporal Motion Encoder (STME) to extract fine-grained motion conditioning, is then developed to improve synthesis quality and fidelity. We evaluate our method, MFD-V2V, on a comprehensive cardiac dataset, demonstrating superior performance over the state-of-the-art in both quantitative metrics and qualitative assessments. Furthermore, we show the benefits of our synthesized cine CMRs improving downstream clinical and analytical tasks, underscoring the broader impact of our approach. Our code is publicly available at https://github.com/SwaksharDeb/MFD-V2V.