
Deep learning-based cone-beam CT motion compensation with single-view temporal resolution.

Maier J, Sawall S, Arheit M, Paysan P, Kachelrieß M

PubMed · Jun 4, 2025
Cone-beam CT (CBCT) scans that are affected by motion often require motion compensation to reduce artifacts or to reconstruct 4D (3D+time) representations of the patient. Most existing strategies rely on some form of gating that sorts the acquired projections into motion bins, which are then reconstructed individually before further post-processing is applied to improve image quality. While this concept works for periodic motion patterns, it fails for non-periodic motion as observed, for example, in irregularly breathing patients. To address this issue and to increase temporal resolution, we propose deep single angle-based motion compensation (deep SAMoCo). To avoid gating and its downsides, deep SAMoCo trains a U-net-like network to predict displacement vector fields (DVFs) representing the motion that occurred between any two given time points of the scan. For training, 4D clinical CT scans are used to simulate 4D CBCT scans as well as the corresponding ground-truth DVFs that map between the different motion states of the scan. The network is then trained to predict these DVFs as a function of the respective projection views and an initial 3D reconstruction. Once the network is trained, an arbitrary motion state corresponding to a certain projection view of the scan can be recovered by estimating DVFs from any other state or view and by considering them during reconstruction. Applied to 4D CBCT simulations of breathing patients, deep SAMoCo provides high-quality reconstructions for periodic and non-periodic motion: deviations from the ground truth are less than 27 HU on average, while respiratory motion, measured via the diaphragm position, can be resolved with an accuracy of about 0.75 mm. Similar results were obtained for real measurements, where a high correlation with external motion-monitoring signals was observed even in patients with highly irregular respiration. The ability to estimate DVFs as a function of two arbitrary projection views and an initial 3D reconstruction makes deep SAMoCo applicable to arbitrary motion patterns with single-view temporal resolution. It is therefore particularly useful for unsteady breathing, for compensating residual motion during a breath-hold scan, or for scans with fast gantry rotation times in which the data acquisition covers only a very limited number of breathing cycles. Furthermore, not requiring gating signals may simplify the clinical workflow and reduce the time needed for patient preparation.
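
The core mechanic — a network that maps two projection views plus an initial reconstruction to a DVF, which then warps the volume to the target motion state — can be sketched as below. This is a minimal 2D toy illustration under assumed shapes and channel counts, not the authors' implementation; the real method operates on 3D volumes with a full U-net.

```python
# Minimal 2D sketch of the deep SAMoCo idea (illustrative assumptions only):
# a small encoder-decoder predicts a DVF from two projection views plus an
# initial reconstruction, and the DVF warps the reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DVFNet(nn.Module):
    def __init__(self, in_ch=3, base=16):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(in_ch, base, 3, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(base * 2, base, 2, stride=2), nn.ReLU(),
            nn.Conv2d(base, 2, 3, padding=1),  # 2-channel DVF (dx, dy)
        )

    def forward(self, view_a, view_b, recon):
        x = torch.cat([view_a, view_b, recon], dim=1)
        return self.dec(self.enc2(self.enc1(x)))

def warp(image, dvf):
    """Warp `image` with a DVF assumed to be in normalized coordinates."""
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w), indexing="ij")
    base = torch.stack([xs, ys], dim=-1).expand(n, -1, -1, -1)
    grid = base + dvf.permute(0, 2, 3, 1)
    return F.grid_sample(image, grid, align_corners=True)

net = DVFNet()
view_a, view_b, recon = (torch.randn(1, 1, 64, 64) for _ in range(3))
dvf = net(view_a, view_b, recon)   # (1, 2, 64, 64)
moved = warp(recon, dvf)           # recon brought to the motion state of view_b
```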

FPA-based weighted average ensemble of deep learning models for classification of lung cancer using CT scan images.

Zhou L, Jain A, Dubey AK, Singh SK, Gupta N, Panwar A, Kumar S, Althaqafi TA, Arya V, Alhalabi W, Gupta BB

PubMed · Jun 3, 2025
Lung cancer, particularly adenocarcinoma, is among the deadliest cancers and a major contributor to global mortality; early diagnosis and appropriate treatment significantly increase patient survival. Computed tomography (CT) is a preferred imaging modality for detecting lung cancer, as it offers detailed visualization of tumor structure and growth. With the advancement of deep learning, automated identification of lung cancer from CT images has become increasingly effective. This study proposes a lung cancer detection framework using a Flower Pollination Algorithm (FPA)-based weighted ensemble of three high-performing pretrained convolutional neural networks (CNNs): VGG16, ResNet101V2, and InceptionV3. Unlike traditional ensemble approaches that assign static or equal weights, the FPA adaptively optimizes the contribution of each CNN based on validation performance, and this dynamic weighting significantly improves diagnostic accuracy. The proposed FPA-based ensemble achieved an accuracy of 98.2%, precision of 98.4%, recall of 98.6%, and an F1 score of 0.985 on the test dataset. In comparison, the best individual CNN (VGG16) achieved 94.6% accuracy, highlighting the advantage of the ensemble approach. These results demonstrate the potential of deep learning ensembles for accurate and reliable cancer diagnosis, supporting early detection and improved treatment outcomes.
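
The weighting scheme can be illustrated with the sketch below: a simplified Flower Pollination Algorithm searches for the weight vector that maximizes validation accuracy of the weighted-average softmax outputs. The Cauchy step is a heavy-tailed stand-in for the Lévy flight of the full FPA, and all data here are toy stand-ins, not the paper's implementation.

```python
# Hedged sketch of an FPA-optimized weighted ensemble (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def ensemble_acc(weights, probs_list, y_true):
    w = np.abs(weights) / (np.abs(weights).sum() + 1e-12)  # convex combination
    fused = sum(wi * p for wi, p in zip(w, probs_list))
    return float((fused.argmax(axis=1) == y_true).mean())

def fpa_optimize(probs_list, y_true, n_flowers=20, iters=100, p_switch=0.8):
    dim = len(probs_list)
    pop = rng.random((n_flowers, dim))
    fit = np.array([ensemble_acc(x, probs_list, y_true) for x in pop])
    best_i = int(fit.argmax())
    best, best_fit = pop[best_i].copy(), fit[best_i]
    for _ in range(iters):
        for i in range(n_flowers):
            if rng.random() < p_switch:  # global pollination toward the best flower
                step = rng.standard_cauchy(dim) * 0.01  # heavy-tailed Levy-like step
                cand = pop[i] + step * (best - pop[i])
            else:  # local pollination between two random flowers
                j, k = rng.choice(n_flowers, size=2, replace=False)
                cand = pop[i] + rng.random() * (pop[j] - pop[k])
            cand = np.clip(cand, 0.0, 1.0)
            f = ensemble_acc(cand, probs_list, y_true)
            if f > fit[i]:
                pop[i], fit[i] = cand, f
            if f > best_fit:
                best, best_fit = cand.copy(), f
    return best / best.sum()

# Toy stand-ins for validation softmax outputs of VGG16, ResNet101V2, InceptionV3.
y_val = rng.integers(0, 3, size=200)
probs = [rng.dirichlet(np.ones(3), size=200) for _ in range(3)]
print("optimized CNN weights:", fpa_optimize(probs, y_val).round(3))
```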

Effect of contrast enhancement on diagnosis of interstitial lung abnormality in automatic quantitative CT measurement.

Choi J, Ahn Y, Kim Y, Noh HN, Do KH, Seo JB, Lee SM

PubMed · Jun 3, 2025
To investigate the effect of contrast enhancement on the diagnosis of interstitial lung abnormalities (ILA) by automatic quantitative CT measurement in patients with paired pre- and post-contrast scans. Patients who underwent chest CT for thoracic surgery between April 2017 and December 2020 were retrospectively analyzed. ILA quantification was performed using deep learning-based automated software. Cases were categorized as ILA or non-ILA according to the Fleischner Society definition, based on the quantification results or on radiologist assessment (reference standard). Measurement variability, agreement, and diagnostic performance between the pre- and post-contrast scans were evaluated. In the 1134 included patients, post-contrast scans quantified a slightly larger volume of nonfibrotic ILA (mean difference: -0.2%), owing to increased ground-glass opacity and reticulation volumes (-0.2% and -0.1%), whereas the fibrotic ILA volume remained unchanged (0.0%). ILA was diagnosed in 15 (1.3%), 22 (1.9%), and 40 (3.5%) patients by pre-contrast scans, post-contrast scans, and radiologists, respectively. Agreement between the pre- and post-contrast scans was substantial (κ = 0.75), but both pre-contrast (κ = 0.46) and post-contrast (κ = 0.54) scans showed only moderate agreement with the radiologist. Sensitivity for ILA (32.5% vs. 42.5%, p = 0.221) and specificity for non-ILA (99.8% vs. 99.5%, p = 0.248) were comparable between pre- and post-contrast scans. Radiologist reclassification of equivocal ILA due to unilateral abnormalities increased the sensitivity for ILA in both pre- and post-contrast scans (to 67.5% and 75.0%, respectively). Applying automated quantification to post-contrast scans therefore appears acceptable in terms of agreement and diagnostic performance; however, radiologists may need to reclassify equivocal ILA to improve sensitivity.
Question: The effect of contrast enhancement on automated quantification of interstitial lung abnormality (ILA) remains unknown.
Findings: Automated quantification measured slightly larger ground-glass opacity and reticulation volumes on post-contrast scans than on pre-contrast scans; however, contrast enhancement did not affect sensitivity for ILA.
Clinical relevance: Applying automated quantification to post-contrast scans appears acceptable in terms of agreement and diagnostic performance.
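
The agreement and diagnostic-performance metrics reported here (Cohen's kappa, sensitivity, specificity) can be computed as in the following sketch; the label arrays are hypothetical stand-ins for per-patient ILA/non-ILA classifications, not the study's data.

```python
# Illustrative computation of the reported agreement and performance metrics.
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

rng = np.random.default_rng(1)
radiologist = rng.integers(0, 2, 1134)  # reference standard (1 = ILA), simulated
pre = np.where(rng.random(1134) < 0.9, radiologist, 1 - radiologist)
post = np.where(rng.random(1134) < 0.9, radiologist, 1 - radiologist)

print("kappa, pre vs post:", round(cohen_kappa_score(pre, post), 2))
for name, pred in [("pre", pre), ("post", post)]:
    tn, fp, fn, tp = confusion_matrix(radiologist, pred).ravel()
    print(f"{name}: sensitivity={tp / (tp + fn):.3f}, specificity={tn / (tn + fp):.3f}")
```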

Robust Detection of Out-of-Distribution Shifts in Chest X-ray Imaging.

Karimi F, Farnia F, Bae KT

PubMed · Jun 2, 2025
This study addresses the critical challenge of detecting out-of-distribution (OOD) chest X-rays, where subtle view differences between lateral and frontal radiographs can lead to diagnostic errors. We develop a GAN-based framework that learns the inherent feature distribution of frontal views from the MIMIC-CXR dataset through latent-space optimization and Kolmogorov-Smirnov statistical testing. Our approach generates similarity scores that reliably identify OOD cases, achieving 100% precision and 97.5% accuracy in detecting lateral views. The method remains reliable across operating conditions, maintaining accuracy above 92.5% and precision exceeding 93% under varying detection thresholds. These results provide both theoretical insights and practical solutions for OOD detection in medical imaging, demonstrating how GANs can establish feature representations for identifying distributional shifts. By significantly improving model reliability when encountering view-based anomalies, our framework enhances the clinical applicability of deep learning systems, ultimately contributing to improved diagnostic safety and patient outcomes.
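
A hedged sketch of the scoring step follows: a latent code is optimized so a pretrained, frozen generator reconstructs the test radiograph, the residual serves as a similarity score, and a Kolmogorov-Smirnov test compares score samples. The tiny generator and 32×32 images are placeholders for a model trained on MIMIC-CXR frontal views; this is an illustration of the idea, not the authors' pipeline.

```python
# Latent-space reconstruction scoring with a KS test (toy scale).
import torch
import torch.nn as nn
from scipy.stats import ks_2samp

# Placeholder generator standing in for one trained on frontal radiographs.
G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32), nn.Tanh())
G.requires_grad_(False)  # generator stays frozen; only the latent code is optimized

def similarity_score(image, steps=200, lr=0.05):
    """Optimize a latent code to reconstruct `image`; return the final residual."""
    z = torch.zeros(1, 64, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((G(z).view(1, 32, 32) - image) ** 2).mean()
        loss.backward()
        opt.step()
    return loss.item()  # low = well explained by the frontal-view manifold

# Score a reference batch of (toy) frontal images and a batch of incoming
# studies, then compare the two score samples with a KS test.
frontal_scores = [similarity_score(torch.rand(1, 32, 32) * 2 - 1) for _ in range(20)]
incoming_scores = [similarity_score(torch.rand(1, 32, 32) * 2 - 1) for _ in range(10)]
stat, p = ks_2samp(frontal_scores, incoming_scores)
print(f"KS statistic={stat:.3f}, p={p:.3f} (a small p flags a distribution shift)")
```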

SPCF-YOLO: An Efficient Feature Optimization Model for Real-Time Lung Nodule Detection.

Ren Y, Shi C, Zhu D, Zhou C

PubMed · Jun 2, 2025
Accurate pulmonary nodule detection in CT imaging remains challenging due to fragmented feature integration in conventional deep learning models. This paper proposes SPCF-YOLO, a real-time detection framework that combines hierarchical feature fusion with anatomical context modeling. First, the space-to-depth convolution (SPDConv) module preserves fine-grained features in low-resolution images through spatial dimension reorganization. Second, the shared feature pyramid convolution (SFPConv) module dynamically extracts multi-scale contextual information using multi-dilation-rate convolutional layers. A dedicated small-object detection layer improves sensitivity to small nodules, in combination with an improved pyramid squeeze attention (PSA) module and an improved contextual transformer (CoTB) module, which strengthen global channel dependencies and reduce feature loss. The model achieves 82.8% mean average precision (mAP) and an 82.9% F1 score on LUNA16 at 151 frames per second (improvements of 17.5% and 82.9% over YOLOv8, respectively), demonstrating real-time clinical viability. Cross-modality validation on SIIM-COVID-19 shows a 1.5% improvement, confirming robust generalization.
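
Space-to-depth convolution is a well-established building block; one common reading of an SPDConv module (illustrative, not the authors' code) folds spatial detail into channels instead of discarding it with strided downsampling, then mixes channels with a stride-1 convolution:

```python
# Minimal space-to-depth convolution block.
import torch
import torch.nn as nn

class SPDConv(nn.Module):
    def __init__(self, in_ch, out_ch, scale=2):
        super().__init__()
        self.unshuffle = nn.PixelUnshuffle(scale)  # (C, H, W) -> (C*s^2, H/s, W/s)
        self.conv = nn.Conv2d(in_ch * scale ** 2, out_ch, 3, stride=1, padding=1)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.conv(self.unshuffle(x)))

x = torch.randn(1, 64, 80, 80)  # a feature map from a low-resolution CT slice
y = SPDConv(64, 128)(x)
print(y.shape)  # torch.Size([1, 128, 40, 40]): downsampled without dropping detail
```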

ViTU-net: A hybrid deep learning model with patch-based LSB approach for medical image watermarking and authentication using a hybrid metaheuristic algorithm.

Nanammal V, Rajalakshmi S, Remya V, Ranjith S

PubMed · Jun 2, 2025
In modern healthcare, telemedicine, health records, and AI-driven diagnostics depend on medical image watermarking to secure chest X-rays for pneumonia diagnosis, ensuring data integrity, confidentiality, and authenticity. A 2024 study found that over 70% of healthcare institutions had faced medical image data breaches. Yet current methods fall short in imperceptibility, robustness against attacks, and deployment efficiency. ViTU-Net integrates several techniques to address these challenges in medical image security and analysis. The model's core component, the Vision Transformer (ViT) encoder, efficiently captures global dependencies and spatial information, while the U-Net decoder enhances image reconstruction; both components leverage the Adaptive Hierarchical Spatial Attention (AHSA) module for improved spatial processing. A patch-based LSB mechanism embeds reversible fragile watermarks within each patch of the segmented region of non-interest (RONI), guided dynamically by adaptive masks derived from the attention mechanism, which minimizes the impact on diagnostic accuracy while making full use of the available spatial capacity. The hybrid metaheuristic optimization algorithm, TuniBee Fusion, dynamically tunes the watermarking parameters, balancing exploration and exploitation to improve watermarking efficiency and robustness. Advanced cryptographic techniques, including SHA-512 hashing and AES encryption, further secure the model, ensuring the authenticity and confidentiality of watermarked medical images. A PSNR of 60.7 dB, an NCC of 0.9999, and an SSIM of 1.00 underscore its effectiveness in preserving image quality, security, and diagnostic accuracy, and robustness analysis against a spectrum of attacks validates ViTU-Net's resilience in real-world scenarios.
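
The LSB embedding step itself is simple to demonstrate. The toy sketch below writes a payload into the least significant bits of pixels inside a RONI mask; the mask, payload, and single-region simplification (rather than true per-patch embedding) are illustrative assumptions, and the SHA-512/AES layers of the full pipeline are omitted.

```python
# Toy LSB embedding/extraction in a region of non-interest (RONI).
import numpy as np

def embed_lsb(image, roni_mask, bits):
    """Write `bits` into the least significant bits of RONI pixels, row-major."""
    out = image.copy()
    flat_idx = np.flatnonzero(roni_mask)[: len(bits)]
    vals = out.flat[flat_idx]
    out.flat[flat_idx] = (vals & 0xFE) | np.asarray(bits, dtype=np.uint8)
    return out

def extract_lsb(image, roni_mask, n_bits):
    flat_idx = np.flatnonzero(roni_mask)[:n_bits]
    return image.flat[flat_idx] & 1

rng = np.random.default_rng(2)
xray = rng.integers(0, 256, (256, 256), dtype=np.uint8)
roni = np.zeros_like(xray, dtype=bool)
roni[:32, :] = True                                  # assume the top border is non-diagnostic
payload = rng.integers(0, 2, 512).astype(np.uint8)   # e.g., a hash of the diagnostic region

marked = embed_lsb(xray, roni, payload)
assert np.array_equal(extract_lsb(marked, roni, 512), payload)
print("max pixel change:", int(np.abs(marked.astype(int) - xray.astype(int)).max()))  # 1
```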

Efficient Medical Vision-Language Alignment Through Adapting Masked Vision Models.

Lian C, Zhou HY, Liang D, Qin J, Wang L

PubMed · Jun 2, 2025
Medical vision-language alignment through cross-modal contrastive learning shows promising performance in image-text matching tasks such as retrieval and zero-shot classification. However, conventional (CLIP-based) cross-modal contrastive learning methods suffer from suboptimal visual representation capabilities, which limits their effectiveness in vision-language alignment. In contrast, models pretrained via multimodal masked modeling struggle with direct cross-modal matching but excel at visual representation. To resolve this tension, we propose ALTA (ALign Through Adapting), an efficient medical vision-language alignment method that uses only about 8% of the trainable parameters and less than 1/5 of the computational cost required for masked record modeling. ALTA achieves superior performance in vision-language matching tasks like retrieval and zero-shot classification by adapting the pretrained vision model from masked record modeling. Additionally, we integrate temporal multiview radiograph inputs to improve the consistency between radiographs and their corresponding descriptions in reports, further strengthening the vision-language alignment. Experimental evaluations show that ALTA outperforms the best-performing counterpart by over 4 absolute percentage points in text-to-image retrieval accuracy and approximately 6 absolute percentage points in image-to-text retrieval accuracy. The adaptation of vision-language models during efficient alignment also promotes better vision and language understanding. Code is publicly available at https://github.com/DopamineLcy/ALTA.
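
A conceptual sketch of adapter-based alignment in this spirit (not the released code at https://github.com/DopamineLcy/ALTA): the masked-modeling-pretrained vision encoder stays frozen, and only small bottleneck adapters plus a projection head are trained with an InfoNCE-style contrastive loss against text embeddings. Dimensions and the two-layer backbone are illustrative assumptions.

```python
# Frozen backbone + trainable adapters for contrastive alignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adapter(nn.Module):
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down, self.up = nn.Linear(dim, bottleneck), nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(F.gelu(self.down(x)))  # residual bottleneck

dim = 768
backbone = nn.TransformerEncoder(nn.TransformerEncoderLayer(dim, 12, batch_first=True), 2)
for p in backbone.parameters():
    p.requires_grad = False  # pretrained encoder stays frozen

adapters = nn.ModuleList([Adapter(dim) for _ in backbone.layers])
proj = nn.Linear(dim, 256)

def encode_image(tokens):
    x = tokens
    for layer, adapter in zip(backbone.layers, adapters):
        x = adapter(layer(x))  # adapter after each frozen block
    return F.normalize(proj(x.mean(1)), dim=-1)

img_emb = encode_image(torch.randn(8, 197, dim))    # 8 radiographs, 197 patch tokens
txt_emb = F.normalize(torch.randn(8, 256), dim=-1)  # stand-in report embeddings
logits = img_emb @ txt_emb.t() / 0.07               # contrastive logits
loss = F.cross_entropy(logits, torch.arange(8))     # matched pairs on the diagonal
trainable = sum(p.numel() for p in list(adapters.parameters()) + list(proj.parameters()))
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable fraction: {trainable / total:.1%}")
```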

Efficiency and Quality of Generative AI-Assisted Radiograph Reporting.

Huang J, Wittbrodt MT, Teague CN, Karl E, Galal G, Thompson M, Chapa A, Chiu ML, Herynk B, Linchangco R, Serhal A, Heller JA, Abboud SF, Etemadi M

PubMed · Jun 2, 2025
Diagnostic imaging interpretation involves distilling multimodal clinical information into text form, a task well-suited to augmentation by generative artificial intelligence (AI). However, to our knowledge, impacts of AI-based draft radiological reporting remain unstudied in clinical settings. To prospectively evaluate the association of radiologist use of a workflow-integrated generative model capable of providing draft radiological reports for plain radiographs across a tertiary health care system with documentation efficiency, the clinical accuracy and textual quality of final radiologist reports, and the model's potential for detecting unexpected, clinically significant pneumothorax. This prospective cohort study was conducted from November 15, 2023, to April 24, 2024, at a tertiary care academic health system. The association between use of the generative model and radiologist documentation efficiency was evaluated for radiographs documented with model assistance compared with a baseline set of radiographs without model use, matched by study type (chest or nonchest). Peer review was performed on model-assisted interpretations. Flagging of pneumothorax requiring intervention was performed on radiographs prospectively. The primary outcomes were association of use of the generative model with radiologist documentation efficiency, assessed by difference in documentation time with and without model use using a linear mixed-effects model; for peer review of model-assisted reports, the difference in Likert-scale ratings using a cumulative-link mixed model; and for flagging pneumothorax requiring intervention, sensitivity and specificity. A total of 23 960 radiographs (11 980 each with and without model use) were used to analyze documentation efficiency. Interpretations with model assistance (mean [SE], 159.8 [27.0] seconds) were faster than the baseline set of those without (mean [SE], 189.2 [36.2] seconds) (P = .02), representing a 15.5% documentation efficiency increase. Peer review of 800 studies showed no difference in clinical accuracy (χ2 = 0.68; P = .41) or textual quality (χ2 = 3.62; P = .06) between model-assisted interpretations and nonmodel interpretations. Moreover, the model flagged studies containing a clinically significant, unexpected pneumothorax with a sensitivity of 72.7% and specificity of 99.9% among 97 651 studies screened. In this prospective cohort study of clinical use of a generative model for draft radiological reporting, model use was associated with improved radiologist documentation efficiency while maintaining clinical quality and demonstrated potential to detect studies containing a pneumothorax requiring immediate intervention. This study suggests the potential for radiologist and generative AI collaboration to improve clinical care delivery.
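
The documentation-efficiency analysis described above can be mirrored with a small sketch: a linear mixed-effects model of documentation time with model use as a fixed effect and radiologist as a random intercept. Column names and the simulated data are hypothetical (loosely matching the reported means of ~189 s vs ~160 s), not the study's dataset or code.

```python
# Illustrative mixed-effects analysis of documentation time.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "radiologist": rng.integers(0, 40, n),  # hypothetical reader IDs
    "model_use": rng.integers(0, 2, n),     # 1 = AI draft report available
})
reader_effect = rng.normal(0, 10, 40)       # per-reader random intercepts
df["doc_time"] = (189 - 29 * df["model_use"]
                  + reader_effect[df["radiologist"]]
                  + rng.normal(0, 30, n))

fit = smf.mixedlm("doc_time ~ model_use", df, groups=df["radiologist"]).fit()
print(fit.summary())  # the model_use coefficient estimates seconds saved per study
```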

Multi-Organ metabolic profiling with [18F]FDG PET/CT predicts pathological response to neoadjuvant immunochemotherapy in resectable NSCLC.

Ma Q, Yang J, Guo X, Mu W, Tang Y, Li J, Hu S

PubMed · Jun 2, 2025
To develop and validate a novel nomogram combining multi-organ PET metabolic metrics for major pathological response (MPR) prediction in resectable non-small cell lung cancer (rNSCLC) patients receiving neoadjuvant immunochemotherapy. This retrospective cohort included rNSCLC patients who underwent baseline [18F]FDG PET/CT prior to neoadjuvant immunochemotherapy at Xiangya Hospital from April 2020 to April 2024. Patients were randomly stratified into training (70%) and validation (30%) cohorts. Using deep learning-based automated segmentation, we quantified metabolic parameters (SUVmean, SUVmax, SUVpeak, MTV, TLG) and their ratios to liver metabolic parameters for primary tumors and nine key organs. Feature selection employed a tripartite approach: univariate analysis, LASSO regression, and random forest optimization. The final multivariable model was translated into a clinically interpretable nomogram, with validation assessing discrimination, calibration, and clinical utility. Among 115 patients (MPR rate: 63.5%, n = 73), five metabolic parameters emerged as predictive biomarkers for MPR: Spleen_SUVmean, Colon_SUVpeak, Spine_TLG, Lesion_TLG, and the spleen-to-liver SUVmax ratio. The nomogram demonstrated consistent performance across cohorts (training AUC = 0.78 [95% CI 0.67-0.88]; validation AUC = 0.78 [95% CI 0.62-0.94]), with robust calibration and enhanced clinical net benefit on decision curve analysis. Compared to tumor-only parameters, the multi-organ model showed higher specificity (100% vs. 92%) and positive predictive value (100% vs. 90%) in the validation set, maintaining 76% overall accuracy. This first-reported multi-organ metabolic nomogram noninvasively predicts MPR in rNSCLC patients receiving neoadjuvant immunochemotherapy, outperforming conventional tumor-centric approaches. By quantifying systemic host-tumor metabolic crosstalk, this tool could help guide personalized therapeutic decisions while mitigating treatment-related risks, representing a paradigm shift towards precision immuno-oncology management.
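
The tripartite selection pipeline can be sketched as follows: a univariate filter, then an L1-penalized (LASSO-style) logistic model for the final fit. The data are synthetic stand-ins whose feature names merely mirror the reported predictors; the random-forest stage and the nomogram construction are omitted for brevity, so this is an outline of the approach, not the study's code.

```python
# Sketch of filter + LASSO-style selection for MPR prediction.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegressionCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
names = ["Spleen_SUVmean", "Colon_SUVpeak", "Spine_TLG", "Lesion_TLG", "SLR_SUVmax"] \
        + [f"other_{i}" for i in range(40)]
X = rng.normal(size=(115, len(names)))
y = (X[:, :5].sum(1) + rng.normal(0, 1.5, 115) > 0).astype(int)  # simulated MPR label

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=15),                                   # univariate filter
    LogisticRegressionCV(penalty="l1", solver="liblinear", Cs=10),  # LASSO-style selection
)
model.fit(X_tr, y_tr)
print(f"validation AUC: {roc_auc_score(y_va, model.predict_proba(X_va)[:, 1]):.2f}")
```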

Implementation costs and cost-effectiveness of ultraportable chest X-ray with artificial intelligence in active case finding for tuberculosis in Nigeria.

Garg T, John S, Abdulkarim S, Ahmed AD, Kirubi B, Rahman MT, Ubochioma E, Creswell J

PubMed · Jun 1, 2025
Availability of ultraportable chest X-ray (CXR) and advances in artificial intelligence (AI)-enabled CXR interpretation are promising developments in tuberculosis (TB) active case finding (ACF), but costing and cost-effectiveness analyses are limited. We provide implementation cost and cost-effectiveness estimates for different screening algorithms using symptoms, CXR, and AI in Nigeria. People 15 years and older were screened for TB symptoms and offered a CXR with AI-enabled interpretation using qXR v3 (Qure.ai) at lung health camps. Sputum samples were tested on Xpert MTB/RIF for individuals reporting symptoms or with qXR abnormality scores ≥ 0.30. We conducted a retrospective costing using a combination of top-down and bottom-up approaches, utilizing itemized expense data from a health-system perspective. We estimated costs in five screening scenarios: abnormality score ≥ 0.30; abnormality score ≥ 0.50; cough ≥ 2 weeks; any symptom; and abnormality score ≥ 0.30 or any symptom. We calculated total implementation costs and cost per bacteriologically confirmed case detected, and assessed cost-effectiveness using the incremental cost-effectiveness ratio (ICER), i.e., the additional cost per additional case detected. Overall, 3205 people with presumptive TB were identified, 1021 were tested, and 85 people with bacteriologically confirmed TB were detected. The abnormality ≥ 0.30 or any symptom scenario had the highest total cost (US$65,704), while cough ≥ 2 weeks had the lowest (US$40,740). The cost per case was US$1,198 for cough ≥ 2 weeks and lowest for any symptom (US$635). Compared to the baseline strategy of cough ≥ 2 weeks, the ICER was US$191 per additional case detected for any symptom and US$2,096 for the abnormality ≥ 0.30 or any symptom algorithm. Using CXR and AI had a lower cost per case detected than any symptom-based screening criterion when asymptomatic TB accounted for more than 30% of all bacteriologically confirmed TB detected. Compared to traditional symptom screening, using CXR and AI in combination with symptoms detects more cases at a lower cost per case detected and is cost-effective. TB programs should explore adoption of CXR and AI for screening in ACF.
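
The cost-effectiveness arithmetic used above reduces to two formulas: cost per case is total cost divided by cases detected, and the ICER is the additional cost per additional case relative to a baseline strategy. In the sketch below, only the two total costs are taken from the text; the case counts are hypothetical values back-calculated to be approximately consistent with the reported cost-per-case and ICER figures.

```python
# Worked example of cost-per-case and ICER arithmetic (case counts hypothetical).
def cost_per_case(total_cost, cases):
    return total_cost / cases

def icer(cost, cases, base_cost, base_cases):
    return (cost - base_cost) / (cases - base_cases)

base_cost, base_cases = 40_740, 34  # cough >= 2 weeks (cases back-calculated)
alt_cost, alt_cases = 65_704, 46    # abnormality >= 0.30 or any symptom (hypothetical)

print(f"baseline cost/case: US${cost_per_case(base_cost, base_cases):,.0f}")
print(f"ICER vs baseline:   US${icer(alt_cost, alt_cases, base_cost, base_cases):,.0f} "
      "per additional case")
```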
