
Deep learning for detection and diagnosis of intrathoracic lymphadenopathy from endobronchial ultrasound multimodal videos: A multi-center study.

Chen J, Li J, Zhang C, Zhi X, Wang L, Zhang Q, Yu P, Tang F, Zha X, Wang L, Dai W, Xiong H, Sun J

PubMed · Aug 19 2025
Convex probe endobronchial ultrasound (CP-EBUS) ultrasonographic features are important for diagnosing intrathoracic lymphadenopathy, but conventional analysis of CP-EBUS imaging relies heavily on physician expertise. To overcome this obstacle, we propose a deep learning-aided diagnostic system (AI-CEMA) to automatically select representative images, identify lymph nodes (LNs), and differentiate benign from malignant LNs based on CP-EBUS multimodal videos. AI-CEMA is first trained on 1,006 LNs from a single center, validated in a retrospective study, and then evaluated in a prospective multi-center study of 267 LNs. AI-CEMA achieves an area under the curve (AUC) of 0.8490 (95% confidence interval [CI], 0.8000-0.8980), comparable to experienced experts (AUC, 0.7847 [95% CI, 0.7320-0.8373]; p = 0.080). Additionally, AI-CEMA transfers successfully to a pulmonary lesion diagnosis task, obtaining a commendable AUC of 0.8192 (95% CI, 0.7676-0.8709). In conclusion, AI-CEMA shows great potential for the clinical diagnosis of intrathoracic lymphadenopathy and pulmonary lesions by providing automated, noninvasive, expert-level diagnosis.
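For readers who want to reproduce this style of reporting, here is a minimal sketch of computing an AUC with a bootstrap 95% CI. The labels and scores are synthetic stand-ins, and the bootstrap approach itself is an assumption; the abstract does not state how its confidence intervals were derived.

```python
# Hedged sketch: bootstrap 95% CI for an AUC, as reported for AI-CEMA.
# The scores/labels below are synthetic stand-ins, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=267)                  # benign=0, malignant=1
y_score = np.clip(y_true * 0.4 + rng.normal(0.4, 0.2, 267), 0, 1)

point_auc = roc_auc_score(y_true, y_score)
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))    # resample with replacement
    if len(np.unique(y_true[idx])) < 2:                # AUC needs both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUC {point_auc:.4f} (95% CI {lo:.4f}-{hi:.4f})")
```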

One-Year Change in Quantitative Computed Tomography Is Associated with Meaningful Outcomes in Fibrotic Lung Disease.

Koslow M, Baraghoshi D, Swigris JJ, Brown KK, Fernández Pérez ER, Huie TJ, Keith RC, Mohning MP, Solomon JJ, Yunt ZX, Manco G, Lynch DA, Humphries SM

PubMed · Aug 18 2025
Whether change in fibrosis on high-resolution CT (HRCT) is associated with near- and longer-term outcomes in patients with fibrotic interstitial lung disease (fILD) remains unclear. We evaluated the association between 1-year change in quantitative fibrosis scores from data-driven textural analysis (DTA) and subsequent forced vital capacity (FVC) and survival in patients with fILD. The primary cohort included fILD patients evaluated from 2017-2020 with baseline and 1-year follow-up HRCT and FVC. Associations between DTA change and subsequent FVC were assessed using linear mixed models; transplant-free survival was assessed using Cox proportional hazards models. The Pulmonary Fibrosis Foundation Patient Registry (PFF-PR) served as the validation cohort. The primary cohort included 407 patients (median [IQR] age, 70.5 [64.8, 75.9] years; 214 male). A 1-year increase in DTA was associated with subsequent FVC decline and transplant-free survival. The largest effect on FVC was observed in patients with low baseline DTA scores, in whom a 5% increase in DTA over 1 year was associated with a change in FVC of -91 mL/year [95% CI: -117, -65] (vs stable DTA: -49 mL/year [95% CI: -69, -29]; p=0.0002). The hazard ratio for transplant-free survival for a 5% increase in DTA over one year was 1.45 [95% CI: 1.25, 1.68]. Findings were confirmed in the validation cohort. One-year change in DTA score is associated with future disease trajectory and transplant-free survival in patients with fILD. DTA could be a useful trial endpoint, cohort-enrichment tool, and metric to incorporate into clinical care.
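To make the survival analysis concrete, here is a minimal sketch of a Cox proportional hazards fit against 1-year DTA change, in the spirit of the analysis above. The synthetic data, column names, and effect size are illustrative assumptions, not study values.

```python
# Hedged sketch: Cox model for transplant-free survival vs. 1-year DTA change.
# Data are synthetic; variable names are illustrative assumptions.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 407
dta_change = rng.normal(2.0, 4.0, n)              # % change in DTA over 1 year
risk = 0.07 * dta_change                          # assumed log-hazard effect
time = rng.exponential(5.0 / np.exp(risk))        # years to death/transplant
df = pd.DataFrame({"dta_change_pct": dta_change,
                   "time": np.minimum(time, 6.0), # administrative censoring
                   "event": (time < 6).astype(int)})

cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
hr_per_5pct = np.exp(5 * cph.params_["dta_change_pct"])  # HR per 5% DTA increase
print(f"HR per 5% DTA increase: {hr_per_5pct:.2f}")
```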

PAINT: Prior-aided Alternate Iterative NeTwork for Ultra-low-dose CT Imaging Using Diffusion Model-restored Sinogram.

Chen K, Zhang W, Deng Z, Zhou Y, Zhao J

PubMed · Aug 18 2025
Obtaining multiple CT scans from the same patient is required in many clinical scenarios, such as lung nodule screening and image-guided radiation therapy. Repeated scans expose patients to higher radiation dose and increase the risk of cancer. In this study, we aim to achieve ultra-low-dose imaging for subsequent scans by collecting an extremely undersampled sinogram via regional few-view scanning, while preserving image quality by using the preceding fully sampled scan as prior. To fully exploit prior information, we propose a two-stage framework consisting of diffusion model-based sinogram restoration and deep learning-based unrolled iterative reconstruction. Specifically, the undersampled sinogram is first restored by a conditional diffusion model with sinogram-domain prior guidance. Then, we formulate the undersampled data reconstruction problem as an optimization problem combining fidelity terms for both the undersampled and restored data with a regularization term based on the image-domain prior. Next, we propose the Prior-aided Alternate Iterative NeTwork (PAINT) to solve this optimization problem. PAINT alternately updates the undersampled- or restored-data fidelity term, and unrolls the iterations to integrate neural network-based prior regularization. In simulated-data experiments with a 112 mm field of view, the proposed framework achieved superior performance in CT value accuracy and image detail preservation. Clinical data experiments also demonstrated that the proposed framework outperformed comparison methods in artifact reduction and structure recovery.
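A plausible form of the reconstruction objective described in the abstract, written out for concreteness. The operators, weights, and regularizer below are assumptions, since the paper's exact formulation is not given here: A_u and A_f denote system matrices for the undersampled and restored acquisitions, y_u the measured few-view sinogram, \hat{y}_f the diffusion-restored sinogram, and x_prior the preceding fully sampled scan.

```latex
% Hedged sketch of a PAINT-style objective; weights and regularizer are assumed.
\min_{x}\; \|A_u x - y_u\|_2^2
         \;+\; \lambda_1 \,\|A_f x - \hat{y}_f\|_2^2
         \;+\; \lambda_2\, R\!\left(x;\, x_{\mathrm{prior}}\right)
```

Alternating updates on the two fidelity terms, with R unrolled into a learned network, would match the alternate-iteration scheme the abstract describes.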

Development of a lung perfusion automated quantitative model based on dual-energy CT pulmonary angiography in patients with chronic pulmonary thromboembolism.

Xi L, Wang J, Liu A, Ni Y, Du J, Huang Q, Li Y, Wen J, Wang H, Zhang S, Zhang Y, Zhang Z, Wang D, Xie W, Gao Q, Cheng Y, Zhai Z, Liu M

PubMed · Aug 18 2025
To develop PerAIDE, an AI-driven system for automated analysis of pulmonary perfusion blood volume (PBV) using dual-energy computed tomography pulmonary angiography (DE-CTPA) in patients with chronic pulmonary thromboembolism (CPE). In this prospective observational study, 32 patients with chronic thromboembolic pulmonary disease (CTEPD) and 151 patients with chronic thromboembolic pulmonary hypertension (CTEPH) were enrolled between January 2022 and July 2024. PerAIDE was developed to automatically quantify three distinct perfusion patterns (normal, reduced, and defective) on DE-CTPA images. Two radiologists independently assessed PBV scores. Follow-up imaging was conducted 3 months after balloon pulmonary angioplasty (BPA). PerAIDE demonstrated high agreement with the radiologists (intraclass correlation coefficient = 0.778) and significantly reduced analysis time (31 ± 3 s vs. 15 ± 4 min, p < 0.001). CTEPH patients had greater perfusion defects than CTEPD patients (0.35 vs. 0.29, p < 0.001), while reduced perfusion was more prevalent in CTEPD (0.36 vs. 0.30, p < 0.001). Perfusion defects correlated positively with pulmonary vascular resistance (ρ = 0.534) and mean pulmonary artery pressure (ρ = 0.482), and negatively with the oxygenation index (ρ = -0.441). PerAIDE effectively differentiated CTEPH from CTEPD (AUC = 0.809, 95% CI: 0.745-0.863). At 3 months post-BPA, a significant reduction in perfusion defects was observed (0.36 vs. 0.33, p < 0.01). CTEPD and CTEPH exhibit distinct perfusion phenotypes on DE-CTPA. PerAIDE reliably quantifies perfusion abnormalities and correlates strongly with clinical and hemodynamic markers of CPE severity. ClinicalTrials.gov, NCT06526468. Registered 28 August 2024 (retrospectively registered): https://clinicaltrials.gov/study/NCT06526468?cond=NCT06526468&rank=1 . PerAIDE is an AI-driven system for dual-energy computed tomography pulmonary angiography (DE-CTPA) that rapidly and accurately assesses perfusion blood volume in patients with chronic pulmonary thromboembolism, effectively distinguishing the CTEPD and CTEPH phenotypes and correlating with disease severity and therapeutic response. Right heart catheterization for definitive diagnosis of chronic pulmonary thromboembolism (CPE) is invasive. PerAIDE-based perfusion defects correlated with disease severity, aiding CPE treatment assessment. CTEPH demonstrates severe perfusion defects, while CTEPD displays predominantly reduced perfusion. PerAIDE employs a U-Net-based adaptive threshold method that is consistent with manual evaluation while processing substantially faster.
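As a rough illustration of how perfusion patterns might be quantified from a PBV map, here is a minimal threshold-based sketch. PerAIDE itself uses a U-Net-based adaptive threshold, so the fixed cutoffs, units, and synthetic volume below are assumptions only.

```python
# Hedged sketch: fractions of normal/reduced/defective perfusion from a PBV map.
# PerAIDE uses a learned adaptive threshold; these fixed cutoffs are assumed.
import numpy as np

rng = np.random.default_rng(2)
pbv = rng.gamma(shape=2.0, scale=20.0, size=(64, 64, 64))  # stand-in PBV volume
lung_mask = np.ones_like(pbv, dtype=bool)                  # stand-in lung mask

defect_thr, reduced_thr = 10.0, 30.0                       # assumed cutoffs
vox = pbv[lung_mask]
fractions = {
    "defective": np.mean(vox < defect_thr),
    "reduced":   np.mean((vox >= defect_thr) & (vox < reduced_thr)),
    "normal":    np.mean(vox >= reduced_thr),
}
print({k: round(float(v), 3) for k, v in fractions.items()})
```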

Applications of Small Language Models in Medical Imaging Classification with a Focus on Prompt Strategies

Yiting Wang, Ziwei Wang, Jiachen Zhong, Di Zhu, Weiyi Li

arXiv preprint · Aug 18 2025
Large language models (LLMs) have shown remarkable capabilities in natural language processing and multi-modal understanding. However, their high computational cost, limited accessibility, and data privacy concerns hinder their adoption in resource-constrained healthcare environments. This study investigates the performance of small language models (SLMs) in a medical imaging classification task, comparing different models and prompt designs to identify the optimal combination for accuracy and usability. Using the NIH Chest X-ray dataset, we evaluate multiple SLMs on the task of classifying chest X-ray positions (anteroposterior [AP] vs. posteroanterior [PA]) under three prompt strategies: baseline instruction, incremental summary prompts, and correction-based reflective prompts. Our results show that certain SLMs achieve competitive accuracy with well-crafted prompts, suggesting that prompt engineering can substantially enhance SLM performance in healthcare applications without requiring deep AI expertise from end users.
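A minimal sketch of what the three prompt strategies might look like for the AP-vs-PA task. The exact wording used in the study is not given here, so these templates are assumptions.

```python
# Hedged sketch of the three prompt strategies compared above; templates assumed.
BASELINE = (
    "You are a radiology assistant. Given this chest X-ray, answer with "
    "exactly one label: AP or PA."
)

INCREMENTAL_SUMMARY = (
    "Step 1: Describe the visible anatomy (clavicles, scapulae, heart size).\n"
    "Step 2: Summarize which projection those findings suggest.\n"
    "Step 3: Answer with exactly one label: AP or PA."
)

CORRECTION_REFLECTIVE = (
    "Propose a label (AP or PA), then list one finding that could contradict "
    "it. If the contradiction is stronger, switch labels. Output the final "
    "label only."
)

def build_prompt(strategy: str, context: str = "") -> str:
    """Compose the instruction plus optional context for an SLM call."""
    return f"{strategy}\n\n{context}".strip()

print(build_prompt(INCREMENTAL_SUMMARY))
```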

Advancing deep learning-based segmentation for multiple lung cancer lesions in real-world multicenter CT scans.

Rafael-Palou X, Jimenez-Pastor A, Martí-Bonmatí L, Muñoz-Nuñez CF, Laudazi M, Alberich-Bayarri Á

PubMed · Aug 18 2025
Accurate segmentation of lung cancer lesions in computed tomography (CT) is essential for precise diagnosis, personalized therapy planning, and treatment response assessment. While automatic segmentation of the primary lung lesion has been widely studied, segmentation of multiple lesions per patient remains underexplored. In this study, we address this gap by introducing a novel, automated approach for multi-instance segmentation of lung cancer lesions, leveraging a heterogeneous cohort with real-world multicenter data. We analyzed 1,081 retrospectively collected CT scans with 5,322 annotated lesions (4.92 ± 13.05 lesions per scan). The cohort was stratified into training (n = 868) and testing (n = 213) subsets. We developed an automated three-step pipeline comprising thoracic bounding box extraction, multi-instance lesion segmentation, and false positive reduction via a novel multiscale cascade classifier that filters spurious and non-lesion candidates. On the independent test set, our method achieved a Dice similarity coefficient of 76% for segmentation and a lesion detection sensitivity of 85%. When evaluated on an external dataset of 188 real-world cases, it achieved a Dice similarity coefficient of 73% and a lesion detection sensitivity of 85%. Our approach accurately detected and segmented multiple lung cancer lesions per patient on CT scans, demonstrating robustness across an independent test set and an external real-world dataset. AI-driven segmentation comprehensively captures lesion burden, enhancing lung cancer assessment and disease monitoring. KEY POINTS: Automatic multi-instance lung cancer lesion segmentation is underexplored yet crucial for disease assessment. A deep learning-based segmentation pipeline trained on multicenter real-world data reached 85% sensitivity at external validation. Thoracic bounding box extraction and false positive reduction improved the pipeline's segmentation performance.
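A schematic sketch of the three-step pipeline, with stubs standing in for the trained models (bounding-box extractor, instance segmenter, multiscale cascade classifier). All function names and heuristics here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of the three-step pipeline; each stage is a stub for a model.
import numpy as np

def extract_thoracic_bbox(ct: np.ndarray) -> tuple[slice, slice, slice]:
    """Stub: crop to the thorax; a real model would predict this box."""
    z, y, x = ct.shape
    return (slice(0, z), slice(y // 8, 7 * y // 8), slice(x // 8, 7 * x // 8))

def segment_lesion_instances(ct_crop: np.ndarray) -> list[np.ndarray]:
    """Stub: return candidate lesion masks; a real model segments instances."""
    return [ct_crop > np.percentile(ct_crop, 99.5)]

def cascade_filter(masks: list[np.ndarray]) -> list[np.ndarray]:
    """Stub false-positive reduction: drop tiny candidates."""
    return [m for m in masks if m.sum() >= 27]  # keep >= 3x3x3 voxels

ct = np.random.default_rng(3).normal(size=(64, 128, 128))
box = extract_thoracic_bbox(ct)
lesions = cascade_filter(segment_lesion_instances(ct[box]))
print(f"{len(lesions)} lesion(s) kept after false-positive reduction")
```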

X-Ray-CoT: Interpretable Chest X-ray Diagnosis with Vision-Language Models via Chain-of-Thought Reasoning

Chee Ng, Liliang Sun, Shaoqing Tang

arXiv preprint · Aug 17 2025
Chest X-ray imaging is crucial for diagnosing pulmonary and cardiac diseases, yet its interpretation demands extensive clinical experience and suffers from inter-observer variability. While deep learning models offer high diagnostic accuracy, their black-box nature hinders clinical adoption in high-stakes medical settings. To address this, we propose X-Ray-CoT (Chest X-Ray Chain-of-Thought), a novel framework leveraging Vision-Language Large Models (LVLMs) for intelligent chest X-ray diagnosis and interpretable report generation. X-Ray-CoT simulates human radiologists' "chain-of-thought" by first extracting multi-modal features and visual concepts, then employing an LLM-based component with a structured Chain-of-Thought prompting strategy to reason and produce detailed natural language diagnostic reports. Evaluated on the CORDA dataset, X-Ray-CoT achieves competitive quantitative performance, with a Balanced Accuracy of 80.52% and F1 score of 78.65% for disease diagnosis, slightly surpassing existing black-box models. Crucially, it uniquely generates high-quality, explainable reports, as validated by preliminary human evaluations. Our ablation studies confirm the integral role of each proposed component, highlighting the necessity of multi-modal fusion and CoT reasoning for robust and transparent medical AI. This work represents a significant step towards trustworthy and clinically actionable AI systems in medical imaging.
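A minimal sketch of a structured chain-of-thought prompt in the spirit of X-Ray-CoT. The section headers and the concept list are assumptions; the resulting prompt would be passed to your own LVLM client.

```python
# Hedged sketch of a structured CoT prompt; headers and concepts are assumed.
COT_TEMPLATE = """You are reading a chest X-ray.
Visual concepts detected: {concepts}

Reason step by step:
1. Findings: list abnormalities grounded in the concepts above.
2. Differential: rank plausible diagnoses with one-line justifications.
3. Conclusion: state the most likely diagnosis.
4. Report: write a short structured radiology report.
"""

def build_cot_prompt(concepts: list[str]) -> str:
    """Fill the CoT template with extracted visual concepts."""
    return COT_TEMPLATE.format(concepts=", ".join(concepts))

print(build_cot_prompt(["cardiomegaly", "blunted costophrenic angle"]))
```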

VariMix: A variety-guided data mixing framework for explainable medical image classifications.

Xiong X, Sun Y, Liu X, Ke W, Lam CT, Gao Q, Tong T, Li S, Tan T

PubMed · Aug 16 2025
Modern deep neural networks are highly over-parameterized, necessitating data augmentation to prevent overfitting and enhance generalization. Generative adversarial networks (GANs) are popular for synthesizing visually realistic images, but these synthetic images often lack diversity and may have ambiguous class labels. Recent data mixing strategies address some of these issues by mixing image labels based on salient regions; however, since the main diagnostic information is not always contained within the salient regions, label mismatches can result in medical image classification. We propose a variety-guided data mixing framework (VariMix), which exploits an absolute difference map (ADM) to address the label mismatch problem for mixed medical images. VariMix generates the ADM using an image-to-image (I2I) GAN across multiple classes and allows bidirectional mixing operations between training samples. VariMix achieves the highest accuracy of 99.30% and 94.60% with a SwinT V2 classifier on a Chest X-ray (CXR) dataset and a Retinal dataset, respectively, and the highest accuracy of 87.73%, 99.28%, 95.13%, and 95.81% with a ConvNeXt classifier on a Breast Ultrasound (US) dataset, a CXR dataset, a Retinal dataset, and a Maternal-Fetal US dataset, respectively. Medical expert evaluation of the generated images further shows the potential of the proposed I2I GAN to improve the accuracy of medical image classification. Extensive experiments demonstrate the superiority of VariMix over existing GAN- and Mixup-based methods on four public datasets using Swin Transformer V2 and ConvNeXt architectures. Furthermore, by projecting the source image onto the hyperplanes of the classifiers, the proposed I2I GAN can generate hyperplane difference maps between the source image and the hyperplane image, demonstrating its ability to interpret medical image classifications. The source code is available at https://github.com/yXiangXiong/VariMix.
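A rough sketch of ADM-guided mixing in the spirit of VariMix. The i2i() stub stands in for the trained image-to-image GAN, and the mask-mass label-weighting rule is an illustrative assumption rather than the paper's exact formulation.

```python
# Hedged sketch: ADM-guided image/label mixing; i2i() is a stand-in GAN.
import numpy as np

def i2i(x: np.ndarray) -> np.ndarray:
    """Stub for the I2I GAN translating x toward another class."""
    return np.clip(x + 0.2, 0.0, 1.0)

def varimix_pair(x: np.ndarray, y_onehot: np.ndarray, target_onehot: np.ndarray):
    """Mix x with its translation, weighting labels by ADM mass (assumed rule)."""
    x_t = i2i(x)
    adm = np.abs(x_t - x)                  # absolute difference map
    adm = adm / (adm.max() + 1e-8)         # normalize to [0, 1]
    mixed = (1.0 - adm) * x + adm * x_t    # pixel-wise blend guided by ADM
    w = float(adm.mean())                  # fraction of "translated" content
    label = (1.0 - w) * y_onehot + w * target_onehot
    return mixed, label

x = np.random.default_rng(4).random((224, 224))
mixed, label = varimix_pair(x, np.array([1.0, 0.0]), np.array([0.0, 1.0]))
print(label)
```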

Prospective validation of an artificial intelligence assessment in a cohort of applicants seeking financial compensation for asbestosis (PROSBEST).

Smesseim I, Lipman KBWG, Trebeschi S, Stuiver MM, Tissier R, Burgers JA, de Gooijer CJ

PubMed · Aug 15 2025
Asbestosis, a rare pneumoconiosis marked by diffuse pulmonary fibrosis, arises from prolonged asbestos exposure. Its diagnosis, guided by the Helsinki criteria, relies on exposure history, clinical findings, radiology, and lung function; however, interobserver variability complicates diagnosis and financial compensation. This study prospectively validated the sensitivity of an AI-driven assessment for asbestosis compensation in the Netherlands. Secondary objectives included evaluating specificity, accuracy, predictive values, area under the receiver operating characteristic curve (ROC-AUC), area under the precision-recall curve (PR-AUC), and interobserver variability. Between September 2020 and July 2022, 92 adult compensation applicants were assessed using both the AI model and pulmonologists' reviews based on Dutch Health Council criteria. The AI model assigned an asbestosis probability score: negative (< 35), uncertain (35-66), or positive (≥ 66). Uncertain cases underwent additional review for a final determination. The AI assessment demonstrated a sensitivity of 0.86 (95% confidence interval: 0.77-0.95), specificity of 0.85 (0.76-0.97), accuracy of 0.87 (0.79-0.93), ROC-AUC of 0.92 (0.84-0.97), and PR-AUC of 0.95 (0.89-0.99). Despite these strong metrics, the 98% sensitivity target was unmet, and pulmonologist reviews showed moderate to substantial interobserver variability. The AI-driven approach thus demonstrated robust accuracy but insufficient sensitivity for validation; addressing interobserver variability and incorporating objective fibrosis measurements could improve future reliability in clinical and compensation settings. In summary, the AI-driven asbestosis probability score underperformed across all metrics compared with internal testing, achieving a sensitivity of 0.86 (95% confidence interval: 0.77-0.95) and falling short of the predefined sensitivity target.
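The triage rule stated above translates directly into code; here is a minimal sketch, with the 35/66 boundaries handled as the reported ranges imply (scores of exactly 66 counted as positive, per "≥ 66").

```python
# Hedged sketch of the published score thresholds: <35 negative, 35-66
# uncertain (additional review), >=66 positive.
def triage(score: float) -> str:
    """Map an AI asbestosis probability score to the assessment route."""
    if score < 35:
        return "negative"
    if score < 66:
        return "uncertain: refer for additional pulmonologist review"
    return "positive"

for s in (20, 50, 80):
    print(s, "->", triage(s))
```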

Spatio-temporal deep learning with temporal attention for indeterminate lung nodule classification.

Farina B, Carbajo Benito R, Montalvo-García D, Bermejo-Peláez D, Maceiras LS, Ledesma-Carbayo MJ

PubMed · Aug 15 2025
Lung cancer is the leading cause of cancer-related death worldwide. Deep learning-based computer-aided diagnosis (CAD) systems in screening programs enhance malignancy prediction, assist radiologists in decision-making, and reduce inter-reader variability. However, limited research has explored the analysis of repeated annual exams of indeterminate lung nodules to improve accuracy. We introduced a novel spatio-temporal deep learning framework, the global attention convolutional recurrent neural network (globAttCRNN), to predict indeterminate lung nodule malignancy using serial screening computed tomography (CT) images from the National Lung Screening Trial (NLST) dataset. The model comprises a lightweight 2D convolutional neural network for spatial feature extraction and a recurrent neural network with a global attention module to capture the temporal evolution of lung nodules. Additionally, we proposed new strategies to handle missing data in the temporal dimension to mitigate potential biases arising from missing time steps, including temporal augmentation and temporal dropout. Our model achieved an area under the receiver operating characteristic curve (AUC-ROC) of 0.954 in an independent test set of 175 lung nodules, each detected in multiple CT scans over patient follow-up, outperforming baseline single-time and multiple-time architectures. The temporal global attention module prioritizes informative time points, enabling the model to capture key spatial and temporal features while ignoring irrelevant or redundant information. Our evaluation emphasizes its potential as a valuable tool for the diagnosis and stratification of patients at risk of lung cancer.
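A minimal PyTorch sketch of a per-time-step CNN feeding a GRU with global temporal attention, in the spirit of globAttCRNN. Layer sizes, the attention form, and all names are illustrative assumptions, not the published architecture.

```python
# Hedged sketch: lightweight 2D CNN per time step + GRU + global temporal
# attention pooling; dimensions and attention form are assumed.
import torch
import torch.nn as nn

class GlobAttCRNNSketch(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden: int = 128):
        super().__init__()
        self.cnn = nn.Sequential(              # light 2D encoder per time step
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(16 * 4 * 4, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, hidden, batch_first=True)
        self.att = nn.Linear(hidden, 1)        # global attention over time
        self.head = nn.Linear(hidden, 1)       # malignancy logit

    def forward(self, x):                      # x: (B, T, 1, H, W)
        b, t = x.shape[:2]
        f = self.cnn(x.flatten(0, 1)).view(b, t, -1)
        h, _ = self.rnn(f)                     # (B, T, hidden)
        w = torch.softmax(self.att(h), dim=1)  # weights over the T time points
        pooled = (w * h).sum(dim=1)            # attention-weighted pooling
        return self.head(pooled).squeeze(-1)

model = GlobAttCRNNSketch()
logits = model(torch.randn(2, 3, 1, 32, 32))   # 2 nodules, 3 annual scans
print(logits.shape)
```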