Page 31 of 141 · 1410 results

Comparison of Outcomes Between Ablation and Lobectomy in Stage IA Non-Small Cell Lung Cancer: A Retrospective Multicenter Study.

Xu B, Chen Z, Liu D, Zhu Z, Zhang F, Lin L

PubMed · Aug 28 2025
Image-guided thermal ablation (IGTA) has been increasingly used in patients with stage IA non-small cell lung cancer (NSCLC) without surgical contraindications, but its long-term outcomes compared to lobectomy remain unknown. This study aims to evaluate the long-term outcomes of IGTA versus lobectomy and explore which patients may benefit most from ablation. After propensity score matching, a total of 290 patients with stage IA NSCLC between 2015 and 2023 were included. Progression-free survival (PFS) and overall survival (OS) were estimated using the Kaplan-Meier method. A Markov model was constructed to evaluate cost-effectiveness. Finally, a radiomics model based on preoperative computed tomography (CT) was developed to perform risk stratification. After matching, the median follow-up intervals were 34.8 months for the lobectomy group and 47.2 months for the ablation group. There were no significant differences between the groups in terms of 5-year PFS (hazard ratio [HR], 1.83; 95% CI, 0.86-3.92; p = 0.118) or OS (HR, 2.44; 95% CI, 0.87-6.63; p = 0.092). In low-income regions, lobectomy was not cost-effective in 99% of simulations. The CT-based radiomics model outperformed the traditional TNM model (AUC, 0.759 vs. 0.650; p < 0.01). Moreover, disease-free survival was significantly lower in the high-risk group than in the low-risk group (p = 0.009). This study comprehensively evaluated IGTA versus lobectomy in terms of survival outcomes, cost-effectiveness, and prognostic prediction. The findings suggest that IGTA may be a safe and feasible alternative to conventional surgery for carefully selected patients.
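
The PFS and OS comparisons above rest on Kaplan-Meier estimates. A minimal pure-Python sketch of the product-limit estimator, on toy follow-up times rather than the study's matched-cohort data:

```python
# Kaplan-Meier product-limit estimator: S(t) = product over event times of
# (1 - deaths_t / at_risk_t). times are follow-up durations (e.g. months);
# events flag 1 = progression/death observed, 0 = censored.
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    surv, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        at_risk = len(data) - i                 # sorted, so later records are >= t
        deaths = sum(e for tt, e in data if tt == t)
        ties = sum(1 for tt, _ in data if tt == t)
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        i += ties                               # skip all records tied at time t
    return curve
```

Censored records (event = 0) shrink the risk set without producing a step, which is why the curve only drops at observed event times.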

Perivascular inflammation in the progression of aortic aneurysms in Marfan syndrome.

Sowa H, Yagi H, Ueda K, Hashimoto M, Karasaki K, Liu Q, Kurozumi A, Adachi Y, Yanase T, Okamura S, Zhai B, Takeda N, Ando M, Yamauchi H, Ito N, Ono M, Akazawa H, Komuro I

PubMed · Aug 28 2025
Inflammation plays important roles in the pathogenesis of vascular diseases. Here we show the involvement of perivascular inflammation in aortic dilatation in Marfan syndrome (MFS). In the aorta of MFS patients and Fbn1C1041G/+ mice, macrophages markedly accumulated in periaortic tissues with increased inflammatory cytokine expression. Metabolic inflammatory stress induced by a high-fat diet (HFD) enhanced vascular inflammation predominantly in periaortic tissues and accelerated aortic dilatation in Fbn1C1041G/+ mice, both of which were inhibited by low-dose pitavastatin. HFD feeding also intensified structural disorganization of the tunica media in Fbn1C1041G/+ mice, including elastic fiber fragmentation, fibrosis, and proteoglycan accumulation, along with increased activation of TGF-β downstream targets. Pitavastatin treatment mitigated these alterations. For non-invasive assessment of perivascular adipose tissue (PVAT) inflammation in a clinical setting, we developed an automated, machine learning-based analysis program for CT images that calculates the perivascular fat attenuation index of the ascending aorta (AA-FAI), which correlates with periaortic fat inflammation. The AA-FAI was significantly higher in patients with MFS than in patients without hereditary connective tissue disorders. These results suggest that perivascular inflammation contributes to aneurysm formation in MFS and may be a potential target for preventing and treating vascular events in MFS.
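
At its simplest, a fat attenuation index is the mean attenuation of adipose-range voxels in a perivascular region. A toy sketch, assuming the conventional -190 to -30 HU adipose window; the authors' exact AA-FAI ring geometry and windowing are not specified here:

```python
# Toy fat attenuation index: mean HU of adipose-range voxels sampled from a
# perivascular region of interest. The -190..-30 HU window is the commonly
# used adipose range, an assumption rather than the paper's exact pipeline.
def fat_attenuation_index(hu_values, lo=-190, hi=-30):
    fat = [v for v in hu_values if lo <= v <= hi]
    if not fat:
        raise ValueError("no adipose-range voxels in the region of interest")
    return sum(fat) / len(fat)
```

Higher (less negative) values indicate denser, more inflamed perivascular fat, which is the signal the AA-FAI exploits.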

Advancements in biomedical rendering: A survey on AI-based denoising techniques.

Denisova E, Francia P, Nardi C, Bocchi L

PubMed · Aug 28 2025
A recent investigation into deep learning-based denoising for early Monte Carlo (MC) path tracing in computed tomography (CT) volume visualization yielded promising quantitative outcomes but inconsistent qualitative assessments. This research probes the underlying causes of this incongruity by deploying a web-based SurveyMonkey questionnaire distributed among healthcare professionals. The survey targeted radiologists, residents, orthopedic surgeons, and veterinarians, leveraging the authors' professional networks for dissemination. To evaluate perceptions, the questionnaire featured randomized sections gauging attitudes towards AI-enhanced image and video quality, confidence in reference images, and clinical applicability. Seventy-four participants took part, encompassing a spectrum of experience levels: <1 year (n = 11), 1-3 years (n = 27), 3-5 years (n = 12), and >5 years (n = 24). A substantial majority (77%) expressed a preference for AI-enhanced images over traditional MC estimates, a preference influenced by participant experience (adjusted OR 0.81, 95% CI 0.67-0.98, p = 0.033). Experience correlated with confidence in AI-generated images (adjusted OR 0.98, 95% CI 0.95-1, p = 0.018-0.047) and satisfaction with video previews, both with and without AI (adjusted OR 0.96-0.98, 95% CI 0.92-1, p = 0.033-0.048). Significant monotonic relationships emerged between experience, confidence (σ = 0.25-0.26, p = 0.025-0.029), and satisfaction (σ = 0.23-0.24, p = 0.037-0.046). The findings underscore the potential of AI post-processing to improve the rendering of biomedical volumes, noting enhanced confidence and satisfaction among experienced participants. The study reveals that participants' preferences may not align perfectly with quality metrics such as peak signal-to-noise ratio and structural similarity index, highlighting nuances in evaluating AI's qualitative impact on CT image denoising.
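
Peak signal-to-noise ratio, one of the quantitative metrics the study contrasts with reader preference, is simple to compute. A minimal sketch over flattened pixel lists (toy values, not the study's images):

```python
import math

# PSNR between a denoised image and its reference:
# PSNR = 10 * log10(MAX^2 / MSE), in decibels. Identical images give infinity.
def psnr(reference, test, max_val=255.0):
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)
```

Because PSNR is a pure pixel-wise error measure, two images with the same PSNR can look very different to a reader, which is exactly the quantitative/qualitative gap the survey probes.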

Development of a Large-Scale Dataset of Chest Computed Tomography Reports in Japanese and a High-Performance Finding Classification Model: Dataset Development and Validation Study.

Yamagishi Y, Nakamura Y, Kikuchi T, Sonoda Y, Hirakawa H, Kano S, Nakamura S, Hanaoka S, Yoshikawa T, Abe O

PubMed · Aug 28 2025
Recent advances in large language models have highlighted the need for high-quality multilingual medical datasets. Although Japan is a global leader in computed tomography (CT) scanner deployment and use, the absence of large-scale Japanese radiology datasets has hindered the development of specialized language models for medical imaging analysis. Despite the emergence of multilingual models and language-specific adaptations, the development of Japanese-specific medical language models has been constrained by a lack of comprehensive datasets, particularly in radiology. This study aimed to address this critical gap in Japanese medical natural language processing resources by developing a comprehensive Japanese CT report dataset through machine translation and establishing a specialized language model for structured finding classification. In addition, a rigorously validated evaluation dataset was created through expert radiologist refinement to ensure reliable assessment of model performance. We translated the CT-RATE dataset (24,283 CT reports from 21,304 patients) into Japanese using GPT-4o mini. The training dataset consisted of 22,778 machine-translated reports, and the validation dataset included 150 reports carefully revised by radiologists. We developed CT-BERT-JPN, a specialized Bidirectional Encoder Representations from Transformers (BERT) model for Japanese radiology text, based on the "tohoku-nlp/bert-base-japanese-v3" architecture, to extract 18 structured findings from reports. Translation quality was assessed with Bilingual Evaluation Understudy (BLEU) and Recall-Oriented Understudy for Gisting Evaluation (ROUGE) scores and further evaluated by radiologists in a dedicated human-in-the-loop experiment: each report in a randomly selected subset was independently reviewed by 2 radiologists, 1 senior (postgraduate year [PGY] 6-11) and 1 junior (PGY 4-5), who used a 5-point Likert scale to rate (1) grammatical correctness, (2) medical terminology accuracy, and (3) overall readability. Inter-rater reliability was measured via quadratic weighted kappa (QWK). Model performance was benchmarked against GPT-4o using accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve (ROC-AUC), and average precision. General text structure was preserved (BLEU: 0.731 findings, 0.690 impression; ROUGE: 0.770-0.876 findings, 0.748-0.857 impression), though expert review identified 3 categories of necessary refinement: contextual adjustment of technical terms, completion of incomplete translations, and localization of Japanese medical terminology. The radiologist-revised translations scored significantly higher than raw machine translations across all dimensions, and all improvements were statistically significant (P<.001). CT-BERT-JPN outperformed GPT-4o on 11 of 18 findings (61%), achieving perfect F1-scores for 4 conditions and F1-scores >0.95 for 14 conditions, despite varied sample sizes (7-82 cases). Our study established a robust Japanese CT report dataset and demonstrated the effectiveness of a specialized language model for structured classification of findings. This hybrid approach of machine translation and expert validation enabled the creation of a large-scale dataset while maintaining high quality. This study provides essential resources for advancing medical artificial intelligence research in Japanese health care settings, with the datasets and models made publicly available to facilitate further advances in the field.
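
Inter-rater reliability on the 5-point Likert ratings was measured with quadratic weighted kappa. A self-contained sketch of QWK for two raters (toy ratings, not the study's data):

```python
# Quadratic weighted kappa for two raters scoring on an ordinal 1..n_cats
# scale: 1 - (weighted observed disagreement / weighted chance disagreement),
# with weights w_ij = (i - j)^2 / (n_cats - 1)^2.
def quadratic_weighted_kappa(a, b, n_cats=5):
    O = [[0] * n_cats for _ in range(n_cats)]      # observed confusion matrix
    for x, y in zip(a, b):
        O[x - 1][y - 1] += 1
    hist_a = [a.count(c + 1) for c in range(n_cats)]
    hist_b = [b.count(c + 1) for c in range(n_cats)]
    n = len(a)
    num = den = 0.0
    for i in range(n_cats):
        for j in range(n_cats):
            w = (i - j) ** 2 / (n_cats - 1) ** 2
            expected = hist_a[i] * hist_b[j] / n   # chance-agreement matrix
            num += w * O[i][j]
            den += w * expected
    return 1.0 - num / den
```

Perfect agreement yields 1.0, statistically independent ratings yield about 0, and the quadratic weights penalize large ordinal disagreements more than near-misses.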

Classification of computed tomography scans: a novel approach implementing an enforced random forest algorithm.

Biondi M, Bortoli E, Marini L, Avitabile R, Bartoli A, Busatti E, Tozzi A, Cimmino MC, Piccini L, Giusti EB, Guasti A

PubMed · Aug 28 2025
Medical imaging faces critical challenges in radiation dose management and protocol standardisation. This study introduces a machine learning approach using a random forest algorithm to classify computed tomography (CT) scan protocols. By leveraging dose monitoring system data, we provide a data-driven solution for establishing Diagnostic Reference Levels (DRLs) while minimising computational resources. We developed a classification workflow using a random forest classifier to categorise CT scans into anatomical regions: head, thorax, abdomen, spine, and complex multi-region scans (thorax + abdomen and total body). The methodology featured an iterative "human-in-the-loop" refinement process involving data preprocessing, machine learning algorithm training, expert validation, and protocol classification. After training the initial model, we applied the methodology to a new, independent dataset. By analysing 52,982 CT scan records from 11 imaging devices across five hospitals, we trained the classifier to distinguish multiple anatomical regions, categorising scans into head, thorax, abdomen, and spine. The final validation on the new database confirmed the model's robustness, achieving 97% accuracy. This research introduces a novel medical imaging protocol classification approach, shifting from manual, time-consuming processes to a data-driven approach built on a random forest algorithm. Our study demonstrates the potential of data-driven methodologies in medical imaging: by integrating computational intelligence with clinical expertise, we have created a framework for managing protocol classification and establishing DRLs. Future research will explore applying this methodology to other radiological procedures.
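
The human-in-the-loop refinement cycle can be caricatured in a few lines: auto-classify each scan, route low-confidence cases to an expert, and keep the corrected labels. The nearest-centroid classifier, the confidence heuristic, and the single feature (scan length in cm) below are illustrative stand-ins, not the paper's random forest or its dose-monitoring features:

```python
# Toy human-in-the-loop protocol classification: a stand-in classifier assigns
# the nearest anatomical-region centroid, and any low-confidence prediction is
# handed to an expert for relabeling before being added to the labeled pool.
def classify(scan_length, centroids):
    region, dist = min(
        ((r, abs(scan_length - c)) for r, c in centroids.items()),
        key=lambda rc: rc[1],
    )
    return region, 1.0 / (1.0 + dist)   # crude confidence: near centroid -> high

centroids = {"head": 20.0, "thorax": 35.0, "abdomen": 50.0}
labeled = []
for length in [19.5, 42.0, 51.0]:
    region, conf = classify(length, centroids)
    if conf < 0.2:                      # ambiguous scan -> expert validation
        region = "thorax+abdomen"       # expert recognizes a multi-region scan
    labeled.append((length, region))
```

The middle scan (42.0 cm) falls between the thorax and abdomen centroids, so it is flagged and relabeled by the expert, mirroring how multi-region protocols enter the workflow.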

Deep Learning-Based 3D and 2D Approaches for Skeletal Muscle Segmentation on Low-Dose CT Images.

Timpano G, Veltri P, Vizza P, Cascini GL, Manti F

PubMed · Aug 27 2025
Automated segmentation of skeletal muscle from computed tomography (CT) images is essential for large-scale quantitative body composition analysis. However, manual segmentation is time-consuming and impractical for routine or high-throughput use. This study presents a systematic comparison of two-dimensional (2D) and three-dimensional (3D) deep learning architectures for segmenting skeletal muscle at the anatomically standardized level of the third lumbar vertebra (L3) in low-dose computed tomography (LDCT) scans. We implemented and evaluated the DeepLabv3+ (2D) and UNet3+ (3D) architectures on a curated dataset of 537 LDCT scans, applying preprocessing protocols, L3 slice selection, and region-of-interest extraction. Model performance was assessed using a comprehensive set of metrics, including the Dice similarity coefficient (DSC) and the 95th percentile Hausdorff distance (HD95). DeepLabv3+ achieved the highest segmentation accuracy (DSC = 0.982 ± 0.010, HD95 = 1.04 ± 0.46 mm), while UNet3+ showed competitive performance (DSC = 0.967 ± 0.013, HD95 = 1.27 ± 0.58 mm) with 26 times fewer parameters (1.27 million vs. 33.6 million) and lower inference time. Both models matched or exceeded results reported in the recent CT-based muscle segmentation literature. This work offers practical insights into architecture selection for automated LDCT-based muscle segmentation workflows, with a focus on the L3 vertebral level, which remains the gold standard in muscle quantification protocols.
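
Both headline metrics are straightforward on binary masks. A pure-Python sketch: DSC on voxel coordinate sets and HD95 on 2D boundary points, using a nearest-rank percentile (a production pipeline would typically use distance transforms instead of brute-force point distances):

```python
# Dice similarity coefficient on sets of foreground voxel coordinates:
# DSC = 2|P ∩ T| / (|P| + |T|).
def dice(pred, truth):
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth))

# 95th-percentile Hausdorff distance on 2D boundary point lists: pool the
# directed nearest-neighbour distances both ways, take the 95th percentile
# (nearest-rank), which discards the worst 5% of outlier boundary errors.
def hd95(pred_pts, truth_pts):
    def directed(a, b):
        return [
            min(((px - qx) ** 2 + (py - qy) ** 2) ** 0.5 for qx, qy in b)
            for px, py in a
        ]
    d = sorted(directed(pred_pts, truth_pts) + directed(truth_pts, pred_pts))
    return d[int(0.95 * (len(d) - 1))]
```

DSC rewards volume overlap while HD95 penalizes boundary errors, which is why the study reports both.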

Intelligent Head and Neck CTA Report Quality Detection with Large Language Models.

Tian L, Lu Y, Fei X, Lu J

PubMed · Aug 27 2025
This study aimed to identify common errors in head and neck CTA reports using GPT-4, ERNIE Bot, and SparkDesk, evaluating their potential to support quality control of Chinese radiology reports. We collected 10,000 head and neck CTA imaging reports from Xuanwu Hospital (Dataset 1) and 5000 multi-center reports (Dataset 2). We identified six common types of errors and detected them using three large language models: GPT-4, ERNIE Bot, and SparkDesk. The overall quality of the reports was assessed using a 5-point Likert scale. We conducted Wilcoxon rank-sum and Friedman tests to compare error detection rates and evaluate the models' performance on different error types and overall scores. For Dataset 2, after manual review, we annotated the six error types and provided overall scores, while also recording the time taken for manual scoring and model detection. Model performance was evaluated using accuracy, precision, recall, and F1 score. The intraclass correlation coefficient (ICC) measured consistency between manual and model scores, and ANOVA compared evaluation times. In Dataset 1, the error detection rates for final reports were significantly lower than those for preliminary reports for all three models. The Friedman test indicated significant differences in error rates among the three models. In Dataset 2, the detection accuracy of the three LLMs for the six error types was above 95%. GPT-4 had moderate consistency with manual scores (ICC = 0.517), while ERNIE Bot and SparkDesk showed slightly lower consistency (ICC = 0.431 and 0.456, respectively; P < 0.001). The models evaluated 100 radiology reports significantly faster than human reviewers. LLMs can differentiate the quality of radiology reports and identify error types, significantly enhancing the efficiency of quality-control review and providing substantial research and practical value in this field.

Ultra-Low-Dose CTPA Using Sparse Sampling CT Combined with the U-Net for Deep Learning-Based Artifact Reduction: An Exploratory Study.

Sauter AP, Thalhammer J, Meurer F, Dorosti T, Sasse D, Ritter J, Leonhardt Y, Pfeiffer F, Schaff F, Pfeiffer D

PubMed · Aug 27 2025
This retrospective study evaluates U-Net-based artifact reduction for dose-reduced sparse-sampling CT (SpSCT) in terms of image quality and diagnostic performance using a reader study and automated detection. CT pulmonary angiograms from 89 patients were used to generate SpSCT data with 16 to 512 views. Twenty patients were reserved for a reader study and test set, the remaining 69 were used to train (53) and validate (16) a dual-frame U-Net for artifact reduction. U-Net post-processed images were assessed for image quality, diagnostic performance, and automated pulmonary embolism (PE) detection using the top-performing network from the 2020 RSNA PE detection challenge. Statistical comparisons were made using two-sided Wilcoxon signed-rank and DeLong two-sided tests. Post-processing with the dual-frame U-Net significantly improved image quality in the internal test set, with a structural similarity index of 0.634/0.378/0.234/0.152 for FBP and 0.894/0.892/0.866/0.778 for U-Net at 128/64/32/16 views, respectively. The reader study showed significantly enhanced image quality (3.15 vs. 3.53 for 256 views, 0.00 vs. 2.52 for 32 views), increased diagnostic confidence (0.00 vs. 2.38 for 32 views), and fewer artifacts across all subsets (P < 0.05). Diagnostic performance, measured by the Sørensen-Dice coefficient, was significantly better for 64- and 32-view images (0.23 vs. 0.44 and 0.00 vs. 0.09, P < 0.05). Automated PE detection was better at fewer views (64 views: 0.77 vs. 0.80, 16 views: 0.59 vs. 0.80), although the differences were not statistically significant. U-Net-based post-processing of SpSCT data significantly enhances image quality and diagnostic performance, supporting substantial dose reduction in CT pulmonary angiography.
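
The structural similarity index reported for FBP versus U-Net outputs follows the standard SSIM formula. A simplified sketch computed globally over the whole image, rather than in the usual 11x11 sliding window with Gaussian weighting:

```python
# Global SSIM between two images given as flat pixel lists:
# SSIM = (2*mx*my + c1)(2*cov + c2) / ((mx^2 + my^2 + c1)(vx + vy + c2)),
# with the standard stabilizers c1 = (0.01*L)^2, c2 = (0.03*L)^2.
def ssim(x, y, max_val=255.0):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((v - mx) ** 2 for v in x) / n
    vy = sum((v - my) ** 2 for v in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2)
    )
```

SSIM compares luminance, contrast, and structure jointly, which is why it tracks the streak-artifact suppression in the sparse-view results more closely than a pure pixel-error metric would.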

PWLS-SOM: alternative PWLS reconstruction for limited-view CT by strategic optimization of a deep learning model.

Chen C, Zhang L, Xing Y, Chen Z

PubMed · Aug 27 2025
While deep learning (DL) methods have exhibited promising results in mitigating streaking artifacts caused by limited-view computed tomography (CT), their generalization to practical applications remains challenging. To address this challenge, we aim to develop a novel approach that integrates DL priors with target-case data consistency for improved artifact suppression and robust reconstruction. Approach: We propose an alternative penalized weighted least squares reconstruction framework by strategic optimization of a DL model (PWLS-SOM). This framework combines data-driven DL priors with data consistency constraints in a three-stage process: (1) group-level embedding: DL network parameters are optimized on a large-scale paired dataset to learn general artifact elimination; (2) significance evaluation: a novel significance score quantifies the contribution of DL model parameters, guiding the subsequent strategic adaptation; (3) individual-level consistency adaptation: PWLS-driven strategic optimization further adapts the DL parameters to the target case's projection data. Main results: Experiments were conducted on sparse-view (90 views) circular-trajectory CT data and a multi-segment linear-trajectory CT scan with a mixed data-missing problem. PWLS-SOM reconstruction demonstrated superior generalization across variations in patients, anatomical structures, and data distributions. It outperformed supervised DL methods in recovering contextual structures and adapting to practical CT scenarios. The method was validated with real experiments on a dead rat, showcasing its applicability to real-world CT scans. Significance: PWLS-SOM reconstruction advances limited-view CT reconstruction by uniting DL priors with PWLS adaptation, facilitating robust and personalized imaging. The significance score provides an efficient metric to evaluate generalization and guide the strategic optimization of DL parameters, enhancing adaptability across diverse data and practical imaging conditions.
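
At its core, a PWLS objective is a weighted least-squares data term plus a regularizer: minimize (y - Ax)^T W (y - Ax) + beta * R(x). A toy gradient-descent solve on a tiny system, with a Tikhonov penalty ||x||^2 standing in for the unspecified R(x); the actual framework adapts the DL model's parameters against such an objective rather than reconstructing x directly:

```python
# Toy PWLS solve by gradient descent on
#   f(x) = (y - Ax)^T W (y - Ax) + beta * ||x||^2,
# whose gradient is g = 2 A^T W (Ax - y) + 2 beta x. A is a small dense
# system matrix, W a per-measurement weight vector (diagonal of W).
def pwls(A, W, y, beta=0.1, step=0.05, iters=500):
    m, n = len(y), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [
            2 * sum(A[i][j] * W[i] * r[i] for i in range(m)) + 2 * beta * x[j]
            for j in range(n)
        ]
        x = [xj - step * gj for xj, gj in zip(x, g)]
    return x
```

With A = I and unit weights the minimizer is y / (1 + beta), which makes the regularizer's shrinkage effect easy to verify by hand.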

E-TBI: explainable outcome prediction after traumatic brain injury using machine learning.

Ngo TH, Tran MH, Nguyen HB, Hoang VN, Le TL, Vu H, Tran TK, Nguyen HK, Can VM, Nguyen TB, Tran TH

PubMed · Aug 27 2025
Traumatic brain injury (TBI) is one of the most prevalent health conditions, with severity assessment serving as an initial step for management, prognosis, and targeted therapy. Existing studies on automated outcome prediction using machine learning (ML) often overlook the importance of TBI features in decision-making and the challenges posed by limited and imbalanced training data. Furthermore, many attempts have focused on quantitatively evaluating ML algorithms without explaining the decisions, making the outcomes difficult to interpret and apply for less-experienced doctors. This study presents a novel supportive tool, named E-TBI (explainable outcome prediction after TBI), designed with a user-friendly web-based interface to assist doctors in outcome prediction after TBI using machine learning. The tool is developed with the capability to visualize rules applied in the decision-making process. At the tool's core is a feature selection and classification module that receives multimodal data from TBI patients (demographic data, clinical data, laboratory test results, and CT findings). It then infers one of four TBI severity levels. This research investigates various machine learning models and feature selection techniques, ultimately identifying the optimal combination of gradient boosting machine and random forest for the task, which we refer to as GBMRF. This method enabled us to identify a small set of essential features, reducing patient testing costs by 35%, while achieving the highest accuracy rates of 88.82% and 89.78% on two datasets (a public TBI dataset and our self-collected dataset, TBI_MH103). Classification modules are available at https://github.com/auverngo110/Traumatic_Brain_Injury_103 .
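
The testing-cost reduction can be sketched as importance-ranked feature selection: keep the most informative tests until a target share of total importance is covered, and report the cost saved. The scores, costs, and feature names below are invented for illustration; the paper derives its importances from the gradient boosting machine / random forest (GBMRF) combination:

```python
# Toy cost-aware feature selection: rank features by importance, keep the
# top-ranked ones until keep_ratio of the total importance is covered, then
# report the fraction of testing cost avoided by dropping the rest.
# Importance scores, costs, and feature names here are hypothetical.
def select_features(importance, cost, keep_ratio=0.65):
    ranked = sorted(importance, key=importance.get, reverse=True)
    total = sum(importance.values())
    kept, acc = [], 0.0
    for f in ranked:
        if acc / total >= keep_ratio:
            break
        kept.append(f)
        acc += importance[f]
    saved = 1 - sum(cost[f] for f in kept) / sum(cost.values())
    return kept, saved

imp = {"GCS": 0.5, "age": 0.25, "midline_shift": 0.15, "glucose": 0.1}
fee = {"GCS": 1, "age": 1, "midline_shift": 3, "glucose": 5}
kept, saved = select_features(imp, fee)
```

Here the two cheap, high-importance features cover 75% of the importance mass, so the expensive imaging and lab features can be skipped, the same trade-off behind the paper's reported 35% cost reduction.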