Page 39 of 141 · 1410 results

Application of deep learning reconstruction at prone position chest scanning of early interstitial lung disease.

Zhao R, Wang Y, Wang J, Wang Z, Xiao R, Ming Y, Piao S, Wang J, Song L, Xu Y, Ma Z, Fan P, Sui X, Song W

PubMed · Aug 19 2025
Timely intervention in interstitial lung disease (ILD) is promising for attenuating lung-function decline and improving clinical outcomes. Prone-position HRCT is essential for early diagnosis of ILD but is limited by its high radiation exposure. This study aimed to explore whether deep learning reconstruction (DLR) could preserve image quality while reducing radiation dose, compared with hybrid iterative reconstruction (HIR), in prone-position scanning of patients with early-stage ILD. This study prospectively enrolled 21 patients with early-stage ILD. All patients underwent high-resolution CT (HRCT) and low-dose CT (LDCT) scans. HRCT images were reconstructed with HIR using standard settings, and LDCT images were reconstructed with DLR (lung/bone kernel) at a mild, standard, or strong setting. Overall image quality, image noise, streak artifacts, and visualization of normal and abnormal ILD features were analysed. The effective dose of LDCT was 1.22 ± 0.09 mSv, 63.7% less than the HRCT dose. The objective noise of the LDCT DLR images was 35.9-112.6% of that of the HRCT HIR images. LDCT DLR was comparable to HRCT HIR in overall image quality. Visualization of bronchiectasis and/or bronchiolectasis with LDCT DLR (bone, strong) was significantly weaker than with HRCT HIR (p = 0.046). LDCT DLR (all settings) did not differ significantly from HRCT HIR in the evaluation of other abnormal features, including ground-glass opacities (GGOs), architectural distortion, reticulation, and honeycombing. With a 63.7% reduction in radiation dose, the overall image quality of LDCT DLR was comparable to that of HRCT HIR in prone scanning of patients with early ILD. This study supports DLR as a promising means of maintaining image quality at a lower radiation dose in prone scanning, and it offers valuable insights for selecting image reconstruction algorithms for the diagnosis and follow-up of early ILD.
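The dose arithmetic in the abstract above can be checked in a few lines. This is a minimal sketch (the function name is ours, not the study's): a 63.7% reduction with an LDCT dose of 1.22 mSv implies an HRCT dose of roughly 1.22 / (1 − 0.637) ≈ 3.36 mSv.

```python
def percent_dose_reduction(hrct_msv: float, ldct_msv: float) -> float:
    """Relative dose reduction of LDCT versus HRCT, in percent."""
    return 100.0 * (hrct_msv - ldct_msv) / hrct_msv

# Back-computing the implied HRCT dose from the reported numbers:
# LDCT = 1.22 mSv at a 63.7% reduction implies HRCT = 1.22 / (1 - 0.637)
implied_hrct = 1.22 / (1 - 0.637)
```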

Lung adenocarcinoma subtype classification based on contrastive learning model with multimodal integration.

Wang C, Liu L, Fan C, Zhang Y, Mai Z, Li L, Liu Z, Tian Y, Hu J, Elazab A

PubMed · Aug 19 2025
Accurately identifying the subtypes of lung adenocarcinoma is essential for selecting the most appropriate treatment plan. Nonetheless, this task is complicated by challenges such as integrating diverse data, similarities among subtypes, and the need to capture contextual features, making precise differentiation difficult. We address these challenges with a multimodal deep neural network that integrates computed tomography (CT) images, annotated lesion bounding boxes, and electronic health records. Our model first combines the bounding boxes, which provide precise lesion locations, with the CT scans, generating a richer semantic representation through feature extraction from regions of interest and enhancing localization accuracy via a vision transformer module. Beyond imaging data, the model also incorporates clinical information encoded by a fully connected encoder. Features extracted from the CT and clinical data are aligned by cosine similarity using a contrastive language-image pre-training (CLIP) module, ensuring they are cohesively integrated. In addition, an attention-based feature fusion module harmonizes the features from the different modalities into a unified representation. This integrated feature set is then fed into a classifier that distinguishes among the three types of adenocarcinoma. Finally, we employ focal loss to mitigate the effects of class imbalance and a contrastive learning loss to enhance feature representation and improve the model's performance. Experiments on public and proprietary datasets demonstrate the efficiency of our model, which achieves a validation accuracy of 81.42% and an area under the curve of 0.9120, significantly outperforming recent multimodal classification approaches. The code is available at https://github.com/fancccc/LungCancerDC .
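The CLIP-style alignment described above optimizes cosine similarity between matched image and clinical-record embeddings. A minimal NumPy sketch of the standard symmetric contrastive (InfoNCE) objective, assuming matched pairs share a batch index (all names here are illustrative, not the authors' code):

```python
import numpy as np

def clip_style_contrastive_loss(img_feats, txt_feats, temperature=0.07):
    """Symmetric InfoNCE loss over L2-normalized embeddings.
    Matched pairs sit on the diagonal of the cosine-similarity matrix."""
    img = img_feats / np.linalg.norm(img_feats, axis=1, keepdims=True)
    txt = txt_feats / np.linalg.norm(txt_feats, axis=1, keepdims=True)
    logits = img @ txt.T / temperature          # (B, B) similarity matrix
    labels = np.arange(len(logits))

    def xent(l):
        # numerically stable cross-entropy with the diagonal as targets
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # averaged over both directions: image->record and record->image
    return 0.5 * (xent(logits) + xent(logits.T))
```

Perfectly aligned pairs yield a near-zero loss; misaligned pairs are penalized, which is what pulls the two modalities into a shared space before fusion.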

Early Detection of Cardiovascular Disease in Chest Population Screening: Challenges for a Rapidly Emerging Cardiac CT Application.

Walstra ANH, Gratama JWC, Heuvelmans MA, Oudkerk M

PubMed · Aug 18 2025
While lung cancer screening (LCS) reduces lung cancer-related mortality in high-risk individuals, cardiovascular disease (CVD) remains a leading cause of death due to shared risk factors such as smoking and age. Coronary artery calcium (CAC) assessment offers an opportunity for concurrent cardiovascular screening, with higher CAC scores indicating increased CVD risk and mortality. Despite guidelines recommending CAC-scoring on all non-contrast chest CT scans, a lack of standardization leads to underreporting and missed opportunities for preventive care. Routine CAC-scoring in LCS can enable personalized CVD management and reduce unnecessary treatments. However, challenges persist in achieving adequate diagnostic quality with one combined image acquisition for both lung and cardiovascular assessment. Advancements in CT technology have improved CAC quantification on low-dose CT scans. Electron-beam tomography, valued for superior temporal resolution, was replaced by multi-detector CT for better spatial resolution and general usability. Dual-source CT further improved temporal resolution and reduced motion artifacts, making non-gated CT protocols for CAC-assessment possible. Additionally, artificial intelligence-based CAC quantification can reduce the added workload of cardiovascular screening within LCS programs. This review explores recent advancements in cardiac CT technologies that address prior challenges in opportunistic CVD screening and considers key factors for integrating CVD screening into LCS programs, aiming for high-quality standardization in CAC reporting.

One-Year Change in Quantitative Computed Tomography Is Associated with Meaningful Outcomes in Fibrotic Lung Disease.

Koslow M, Baraghoshi D, Swigris JJ, Brown KK, Fernández Pérez ER, Huie TJ, Keith RC, Mohning MP, Solomon JJ, Yunt ZX, Manco G, Lynch DA, Humphries SM

PubMed · Aug 18 2025
Whether change in fibrosis on high-resolution CT (HRCT) is associated with near- and longer-term outcomes in patients with fibrotic interstitial lung disease (fILD) remains unclear. We evaluated the association between 1-year change in quantitative fibrosis (DTA) scores and subsequent forced vital capacity (FVC) and survival in patients with fILD. The primary cohort included fILD patients evaluated from 2017-2020 with baseline and 1-year follow-up HRCT and FVC. Associations between DTA change and subsequent FVC were assessed using linear mixed models. Transplant-free survival was assessed using Cox proportional hazards models. The Pulmonary Fibrosis Foundation Patient Registry (PFF-PR) served as the validation cohort. The primary cohort included 407 patients (median [IQR] age, 70.5 [64.8, 75.9] years; 214 male). One-year increase in DTA was associated with subsequent FVC decline and transplant-free survival. The largest effect on FVC was observed in patients with low baseline DTA scores, in whom a 5% increase in DTA over 1 year was associated with a change in FVC of -91 mL/year [95% CI: -117, -65] (vs stable DTA: -49 mL/year [95% CI: -69, -29]; p=0.0002). The hazard ratio for transplant-free survival for a 5% increase in DTA over one year was 1.45 [95% CI: 1.25, 1.68]. Findings were confirmed in the validation cohort. One-year change in DTA score is associated with future disease trajectory and transplant-free survival in patients with fILD. DTA could be a useful trial endpoint, cohort enrichment tool, and metric to incorporate into clinical care.
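Hazard ratios from a Cox proportional hazards model, like the 1.45 per 5% DTA increase above, rescale by exponent because the log-hazard is linear in the covariate. A small sketch of that conversion (the function name is ours, for illustration only):

```python
import math

def rescale_hazard_ratio(hr: float, from_delta: float, to_delta: float) -> float:
    """Under proportional hazards, log-HR scales linearly with the covariate
    change, so an HR per `from_delta` units converts to an HR per `to_delta`
    units by exponent scaling: HR_new = HR ** (to_delta / from_delta)."""
    return math.exp(math.log(hr) * to_delta / from_delta)

# Reported HR 1.45 per 5% DTA increase -> implied HR per 1% increase
hr_per_1pct = rescale_hazard_ratio(1.45, 5.0, 1.0)
```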

Interactive AI annotation of medical images in a virtual reality environment.

Orsmaa L, Saukkoriipi M, Kangas J, Rasouli N, Järnstedt J, Mehtonen H, Sahlsten J, Jaskari J, Kaski K, Raisamo R

PubMed · Aug 18 2025
Artificial intelligence (AI) achieves high-quality annotations of radiological images, yet often lacks the robustness required in clinical practice. Interactive annotation starts with an AI-generated delineation, allowing radiologists to refine it with feedback, potentially improving precision and reliability. These techniques have been explored in two-dimensional desktop environments but have not been validated by radiologists or integrated with immersive visualization technologies. We used a virtual reality (VR) system to determine (1) whether annotation quality improves when radiologists can edit the AI annotation and (2) whether the extra editing work is worthwhile. We evaluated the clinical feasibility of an interactive VR approach for annotating the mandibular and mental foramina on segmented 3D mandibular models. Three experienced dentomaxillofacial radiologists reviewed AI-generated annotations and, when needed, refined them at the voxel level in 3D space through click-based interactions until clinical standards were met. Our results indicate that integrating expert feedback within an immersive VR environment enhances annotation accuracy, improves clinical usability, and offers valuable insights for developing medical image analysis systems that incorporate radiologist input. This study is the first to compare the quality of original and interactively refined AI annotations using radiologists' judgments as the measure. More research is needed for generalization.

A systematic review of comparisons of AI and radiologists in the diagnosis of HCC in multiphase CT: implications for practice.

Younger J, Morris E, Arnold N, Athulathmudali C, Pinidiyapathirage J, MacAskill W

PubMed · Aug 18 2025
This systematic review examines the literature on artificial intelligence (AI) algorithms for the diagnosis of hepatocellular carcinoma (HCC) among focal liver lesions on multiphase CT images, compared with radiologists, focusing on performance metrics that include at least sensitivity and specificity. We searched Embase, PubMed, and Web of Science for studies published from January 2018 to May 2024. Eligible studies evaluated AI algorithms for diagnosing HCC using multiphase CT, with radiologist interpretation as a comparator. The performance of the AI models and radiologists was recorded as each study's sensitivity and specificity. TRIPOD+AI was used for quality appraisal, and PROBAST was used to assess the risk of bias. Seven of the 3,532 studies reviewed were included. All seven compared the performance of AI models and radiologists. Two studies additionally assessed performance with and without supplementary clinical information to assist the AI model in diagnosis, and three additionally evaluated radiologists' performance when assisted by the AI algorithm. The AI algorithms demonstrated a sensitivity of 63.0-98.6% and a specificity of 82.0-98.6%. In comparison, junior radiologists (less than 10 years of experience) exhibited a sensitivity of 41.2-92.0% and a specificity of 72.2-100%, while senior radiologists (more than 10 years of experience) achieved a sensitivity of 63.9-93.7% and a specificity of 71.9-99.9%. AI algorithms demonstrate adequate performance in diagnosing HCC from focal liver lesions on multiphase CT images. Across geographic settings, AI could help streamline workflows and improve access to timely diagnosis. However, thoughtful implementation strategies are still needed to mitigate bias and overreliance.
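Sensitivity and specificity, the two metrics every included study had to report, come directly from the confusion-matrix counts. A minimal sketch (our own helper, not from the review):

```python
def sensitivity_specificity(tp: int, fn: int, tn: int, fp: int):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)
```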

CTFlow: Video-Inspired Latent Flow Matching for 3D CT Synthesis

Jiayi Wang, Hadrien Reynaud, Franciskus Xaverius Erick, Bernhard Kainz

arXiv preprint · Aug 18 2025
Generative modelling of entire CT volumes conditioned on clinical reports has the potential to accelerate research through data augmentation and privacy-preserving synthesis, and to reduce regulatory constraints on patient data while preserving diagnostic signals. With the recent release of CT-RATE, a large-scale collection of 3D CT volumes paired with their respective clinical reports, training large text-conditioned CT volume generation models has become achievable. In this work, we introduce CTFlow, a 0.5B-parameter latent flow matching transformer model conditioned on clinical reports. We leverage the A-VAE from FLUX to define our latent space, and rely on the CT-Clip text encoder to encode the clinical reports. To generate consistent whole CT volumes while keeping memory constraints tractable, we rely on a custom autoregressive approach: the model predicts the first sequence of slices of the volume from text alone, then conditions on the previously generated sequence of slices together with the text to predict the following sequence. We evaluate our results against a state-of-the-art generative CT model and demonstrate the superiority of our approach in terms of temporal coherence, image diversity, and text-image alignment, as measured by FID, FVD, IS, and CLIP score.
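The chunked autoregressive scheme described above can be sketched as a simple loop: the first chunk of slices is generated from text alone, and each subsequent chunk is conditioned on the previous chunk plus the text. Here `model(text_emb, prev_chunk)` is a hypothetical callable standing in for the latent flow-matching sampler, and the toy model below only illustrates the control flow:

```python
import numpy as np

def generate_volume_autoregressive(model, text_emb, n_chunks):
    """Generate a volume chunk-by-chunk; chunk 0 sees text only, every
    later chunk also sees the previously generated chunk."""
    chunks = []
    prev = None
    for _ in range(n_chunks):
        prev = model(text_emb, prev)
        chunks.append(prev)
    return np.concatenate(chunks, axis=0)  # stack chunks along the slice axis

def toy_model(text_emb, prev_chunk):
    """Stand-in sampler: emits a constant chunk whose value records how
    much context it received (purely to make the recursion visible)."""
    base = np.zeros((4, 8, 8))
    return base if prev_chunk is None else base + prev_chunk.mean() + 1
```

The memory benefit is that only one chunk of latents is materialized per step, rather than the whole volume at once.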

Advancing deep learning-based segmentation for multiple lung cancer lesions in real-world multicenter CT scans.

Rafael-Palou X, Jimenez-Pastor A, Martí-Bonmatí L, Muñoz-Nuñez CF, Laudazi M, Alberich-Bayarri Á

PubMed · Aug 18 2025
Accurate segmentation of lung cancer lesions in computed tomography (CT) is essential for precise diagnosis, personalized therapy planning, and treatment response assessment. While automatic segmentation of the primary lung lesion has been widely studied, the ability to segment multiple lesions per patient remains underexplored. In this study, we address this gap by introducing a novel, automated approach for multi-instance segmentation of lung cancer lesions, leveraging a heterogeneous cohort with real-world multicenter data. We analyzed 1,081 retrospectively collected CT scans with 5,322 annotated lesions (4.92 ± 13.05 lesions per scan). The cohort was stratified into training (n = 868) and testing (n = 213) subsets. We developed an automated three-step pipeline comprising thoracic bounding box extraction, multi-instance lesion segmentation, and false positive reduction via a novel multiscale cascade classifier that filters spurious and non-lesion candidates. On the independent test set, our method achieved a Dice similarity coefficient of 76% for segmentation and a lesion detection sensitivity of 85%. When evaluated on an external dataset of 188 real-world cases, it achieved a Dice similarity coefficient of 73% and a lesion detection sensitivity of 85%. Our approach accurately detected and segmented multiple lung cancer lesions per patient on CT scans, demonstrating robustness across an independent test set and an external real-world dataset. AI-driven segmentation comprehensively captures lesion burden, enhancing lung cancer assessment and disease monitoring. Key Points: Automatic multi-instance lung cancer lesion segmentation is underexplored yet crucial for disease assessment. We developed a deep learning-based segmentation pipeline trained on multicenter real-world data, which reached 85% sensitivity at external validation. Thoracic bounding box extraction and false positive reduction improved the pipeline's segmentation performance.
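The Dice similarity coefficient used as the headline metric above is the overlap measure 2|A∩B| / (|A|+|B|) between predicted and reference masks. A minimal NumPy sketch (our own helper, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray,
                     eps: float = 1e-8) -> float:
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
    `eps` guards against division by zero when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return float(2.0 * inter / (pred.sum() + target.sum() + eps))
```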

Multi-Phase Automated Segmentation of Dental Structures in CBCT Using a Lightweight Auto3DSeg and SegResNet Implementation

Dominic LaBella, Keshav Jha, Jared Robbins, Esther Yu

arXiv preprint · Aug 18 2025
Cone-beam computed tomography (CBCT) has become an invaluable imaging modality in dentistry, enabling 3D visualization of teeth and surrounding structures for diagnosis and treatment planning. Automated segmentation of dental structures in CBCT can efficiently assist in identifying pathology (e.g., pulpal or periapical lesions) and facilitate radiation therapy planning in head and neck cancer patients. We describe the DLaBella29 team's approach for the MICCAI 2025 ToothFairy3 Challenge, which involves a deep learning pipeline for multi-class tooth segmentation. We utilized the MONAI Auto3DSeg framework with a 3D SegResNet architecture, trained on a subset of the ToothFairy3 dataset (63 CBCT scans) with 5-fold cross-validation. Key preprocessing steps included image resampling to 0.6 mm isotropic resolution and intensity clipping. We applied an ensemble fusion using Multi-Label STAPLE on the 5-fold predictions to infer a Phase 1 segmentation and then conducted tight cropping around the easily segmented Phase 1 mandible to perform Phase 2 segmentation on the smaller nerve structures. Our method achieved an average Dice of 0.87 on the ToothFairy3 challenge out-of-sample validation set. This paper details the clinical context, data preparation, model development, results of our approach, and discusses the relevance of automated dental segmentation for improving patient care in radiation oncology.
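The preprocessing steps named above (resampling to 0.6 mm isotropic resolution and intensity clipping) can be sketched as follows. This is an illustrative nearest-neighbour version with made-up window bounds; the challenge team's exact clipping values and interpolation scheme are not given in the abstract:

```python
import numpy as np

def clip_and_resample(volume, spacing, target_spacing=0.6,
                      lo=-1000.0, hi=3000.0):
    """Clip intensities to [lo, hi] and nearest-neighbour resample the
    volume to isotropic `target_spacing` (mm). `spacing` is the per-axis
    voxel size of the input in mm."""
    vol = np.clip(volume, lo, hi)
    # new voxel count per axis so that physical extent is preserved
    new_shape = [max(1, round(s * sp / target_spacing))
                 for s, sp in zip(vol.shape, spacing)]
    # nearest-neighbour source index for each output voxel
    idx = [np.minimum((np.arange(n) * s / n).astype(int), s - 1)
           for n, s in zip(new_shape, vol.shape)]
    return vol[np.ix_(*idx)]
```

In practice a spline or linear interpolator (e.g. as provided by MONAI's transforms) would replace the nearest-neighbour indexing, but the spacing arithmetic is the same.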

Development of a lung perfusion automated quantitative model based on dual-energy CT pulmonary angiography in patients with chronic pulmonary thromboembolism.

Xi L, Wang J, Liu A, Ni Y, Du J, Huang Q, Li Y, Wen J, Wang H, Zhang S, Zhang Y, Zhang Z, Wang D, Xie W, Gao Q, Cheng Y, Zhai Z, Liu M

PubMed · Aug 18 2025
To develop PerAIDE, an AI-driven system for automated analysis of pulmonary perfusion blood volume (PBV) using dual-energy computed tomography pulmonary angiography (DE-CTPA) in patients with chronic pulmonary thromboembolism (CPE). In this prospective observational study, 32 patients with chronic thromboembolic pulmonary disease (CTEPD) and 151 patients with chronic thromboembolic pulmonary hypertension (CTEPH) were enrolled between January 2022 and July 2024. PerAIDE was developed to automatically quantify three distinct perfusion patterns-normal, reduced, and defective-on DE-CTPA images. Two radiologists independently assessed PBV scores. Follow-up imaging was conducted 3 months after balloon pulmonary angioplasty (BPA). PerAIDE demonstrated high agreement with the radiologists (intraclass correlation coefficient = 0.778) and significantly reduced analysis time (31 ± 3 s vs. 15 ± 4 min, p < 0.001). CTEPH patients had greater perfusion defects than CTEPD patients (0.35 vs. 0.29, p < 0.001), while reduced perfusion was more prevalent in CTEPD (0.36 vs. 0.30, p < 0.001). Perfusion defects correlated positively with pulmonary vascular resistance (ρ = 0.534) and mean pulmonary artery pressure (ρ = 0.482), and negatively with oxygenation index (ρ = -0.441). PerAIDE effectively differentiated CTEPH from CTEPD (AUC = 0.809, 95% CI: 0.745-0.863). At 3 months post-BPA, a significant reduction in perfusion defects was observed (0.36 vs. 0.33, p < 0.01). CTEPD and CTEPH exhibit distinct perfusion phenotypes on DE-CTPA. PerAIDE reliably quantifies perfusion abnormalities and correlates strongly with clinical and hemodynamic markers of CPE severity. ClinicalTrials.gov, NCT06526468. Registered 28 August 2024 (retrospectively registered), https://clinicaltrials.gov/study/NCT06526468?cond=NCT06526468&rank=1 .
PerAIDE is a dual-energy computed tomography pulmonary angiography (DE-CTPA) AI-driven system that rapidly and accurately assesses perfusion blood volume in patients with chronic pulmonary thromboembolism, effectively distinguishing between CTEPD and CTEPH phenotypes and correlating with disease severity and therapeutic response. Right heart catheterization, the definitive diagnostic test for chronic pulmonary thromboembolism (CPE), is invasive. PerAIDE-based perfusion defects correlated with disease severity, aiding assessment of CPE treatment. CTEPH demonstrates severe perfusion defects, while CTEPD displays predominantly reduced perfusion. PerAIDE employs a U-Net-based adaptive threshold method, which achieves agreement with manual evaluation at substantially faster processing times.
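The ρ values quoted above are Spearman rank correlations. For completeness, a minimal NumPy sketch of the statistic (valid for samples without tied values; our own helper, not part of PerAIDE):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation (no tie correction): the Pearson
    correlation of the rank-transformed samples."""
    rx = np.argsort(np.argsort(x)).astype(float)   # ranks of x
    ry = np.argsort(np.argsort(y)).astype(float)   # ranks of y
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))
```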
