
Radiation Dose Reduction and Image Quality Improvement of UHR CT of the Neck by Novel Deep-learning Image Reconstruction.

Messerle DA, Grauhan NF, Leukert L, Dapper AK, Paul RH, Kronfeld A, Al-Nawas B, Krüger M, Brockmann MA, Othman AE, Altmann S

PubMed · Jun 30, 2025
We evaluated a dedicated dose-reduced UHR-CT protocol for head and neck imaging, combined with a novel deep learning reconstruction algorithm, to assess its impact on image quality and radiation exposure. We retrospectively analyzed ninety-eight consecutive patients examined using a new body-weight-adapted protocol. Images were reconstructed using adaptive iterative dose reduction (AIDR) and the Advanced intelligent Clear-IQ Engine with an already established (DL-1) and a newly implemented (DL-2) deep learning reconstruction algorithm. An additional thirty patients were scanned without body-weight-adapted dose reduction (DL-1-SD). Three readers rated subjective image quality, including overall diagnostic acceptability and the assessability of several anatomic regions. For objective image quality, signal-to-noise ratio and contrast-to-noise ratio were calculated for the temporalis muscle, the masseter muscle, and the floor of the mouth. Radiation dose was evaluated by comparing computed tomography dose index (CTDIvol) values. Deep learning-based reconstruction algorithms significantly improved subjective image quality (diagnostic acceptability: DL-1 vs AIDR, OR 25.16 [6.30; 38.85], p < 0.001; DL-2 vs AIDR, OR 720.15 [410.14; >999.99], p < 0.001). Although higher doses (DL-1-SD) yielded significantly better image quality, DL-2 was significantly superior to all other techniques across all defined parameters (p < 0.001). Similar results were found for objective image quality, e.g. image noise (DL-1 vs AIDR, OR 19.0 [11.56; 31.24], p < 0.001; DL-2 vs AIDR, OR >999.9 [825.81; >999.99], p < 0.001). Using weight-adapted kV reduction, very low radiation doses were achieved (CTDIvol: 7.4 ± 4.2 mGy). AI-based reconstruction algorithms in ultra-high-resolution head and neck imaging provide excellent image quality while achieving very low radiation exposure.
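As a rough illustration of the objective image-quality metrics named above, the sketch below computes signal-to-noise and contrast-to-noise ratios from ROI pixel values; the ROIs, HU values, and noise definition are illustrative assumptions, not the study's measurement protocol.

```python
import numpy as np

def snr(roi_hu: np.ndarray) -> float:
    """Signal-to-noise ratio: mean ROI attenuation over its standard deviation."""
    return roi_hu.mean() / roi_hu.std(ddof=1)

def cnr(roi_hu: np.ndarray, background_hu: np.ndarray) -> float:
    """Contrast-to-noise ratio: attenuation difference over background noise."""
    return (roi_hu.mean() - background_hu.mean()) / background_hu.std(ddof=1)

# Placeholder ROIs (in HU) standing in for ROIs drawn on the same slice of each
# reconstruction (AIDR, DL-1, DL-2); real values would come from the images.
rng = np.random.default_rng(0)
muscle_roi = rng.normal(60, 8, 500)        # hypothetical masseter ROI
background_roi = rng.normal(-80, 10, 500)  # hypothetical subcutaneous fat ROI
print(f"SNR: {snr(muscle_roi):.2f}  CNR: {cnr(muscle_roi, background_roi):.2f}")
```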

A Hierarchical Slice Attention Network for Appendicitis Classification in 3D CT Scans

Chia-Wen Huang, Haw Hwai, Chien-Chang Lee, Pei-Yuan Wu

arXiv preprint · Jun 29, 2025
Timely and accurate diagnosis of appendicitis is critical in clinical settings to prevent serious complications. While CT imaging remains the standard diagnostic tool, the growing number of cases can overwhelm radiologists, potentially causing delays. In this paper, we propose a deep learning model that leverages 3D CT scans for appendicitis classification, incorporating Slice Attention mechanisms guided by external 2D datasets to enhance small lesion detection. Additionally, we introduce a hierarchical classification framework using pre-trained 2D models to differentiate between simple and complicated appendicitis. Our approach improves AUC by 3% for appendicitis and 5.9% for complicated appendicitis, offering a more efficient and reliable diagnostic solution compared to previous work.
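The slice attention idea can be sketched as learned attention pooling over per-slice features produced by a 2D encoder; the feature dimension, scoring network, and shapes below are assumptions for illustration, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SliceAttentionPool(nn.Module):
    """Aggregate per-slice features of a 3D CT volume with learned attention weights."""

    def __init__(self, feat_dim: int = 512, hidden: int = 128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, slice_feats: torch.Tensor):
        # slice_feats: (batch, n_slices, feat_dim) from a 2D backbone applied slice by slice
        weights = torch.softmax(self.score(slice_feats), dim=1)  # (batch, n_slices, 1)
        pooled = (weights * slice_feats).sum(dim=1)              # (batch, feat_dim)
        return pooled, weights.squeeze(-1)

# Example: 2 volumes, 64 slices each, 512-d per-slice features
feats = torch.randn(2, 64, 512)
pooled, attn = SliceAttentionPool()(feats)
print(pooled.shape, attn.shape)  # torch.Size([2, 512]) torch.Size([2, 64])
```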

MedRegion-CT: Region-Focused Multimodal LLM for Comprehensive 3D CT Report Generation

Sunggu Kyung, Jinyoung Seo, Hyunseok Lim, Dongyeong Kim, Hyungbin Park, Jimin Sung, Jihyun Kim, Wooyoung Jo, Yoojin Nam, Namkug Kim

arXiv preprint · Jun 29, 2025
The recent release of RadGenome-Chest CT has significantly advanced CT-based report generation. However, existing methods primarily focus on global features, making it challenging to capture region-specific details, which may cause certain abnormalities to go unnoticed. To address this, we propose MedRegion-CT, a region-focused Multi-Modal Large Language Model (MLLM) framework, featuring three key innovations. First, we introduce Region Representative ($R^2$) Token Pooling, which utilizes a 2D-wise pretrained vision model to efficiently extract 3D CT features. This approach generates global tokens representing overall slice features and region tokens highlighting target areas, enabling the MLLM to process comprehensive information effectively. Second, a universal segmentation model generates pseudo-masks, which are then processed by a mask encoder to extract region-centric features. This allows the MLLM to focus on clinically relevant regions, using six predefined region masks. Third, we leverage segmentation results to extract patient-specific attributes, including organ size, diameter, and location. These are converted into text prompts, enriching the MLLM's understanding of patient-specific contexts. To ensure rigorous evaluation, we conducted benchmark experiments on report generation using the RadGenome-Chest CT. MedRegion-CT achieved state-of-the-art performance, outperforming existing methods in natural language generation quality and clinical relevance while maintaining interpretability. The code for our framework is publicly available.
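One plausible way to obtain region tokens from segmentation pseudo-masks is masked average pooling over a feature volume; the sketch below illustrates that idea under assumed shapes and is not the paper's Region Representative token pooling implementation.

```python
import torch

def region_tokens(feat_volume: torch.Tensor, region_masks: torch.Tensor) -> torch.Tensor:
    """Masked average pooling: one token per region mask.

    feat_volume:  (C, D, H, W) visual features for one CT volume
    region_masks: (R, D, H, W) binary masks, e.g. six predefined regions
    returns:      (R, C) region tokens
    """
    C = feat_volume.shape[0]
    feats = feat_volume.reshape(C, -1)                                # (C, D*H*W)
    masks = region_masks.reshape(region_masks.shape[0], -1).float()   # (R, D*H*W)
    sums = masks @ feats.T                                            # (R, C)
    counts = masks.sum(dim=1, keepdim=True).clamp(min=1.0)
    return sums / counts

feat = torch.randn(256, 32, 64, 64)        # hypothetical 3D feature map
masks = torch.rand(6, 32, 64, 64) > 0.9    # six placeholder region masks
print(region_tokens(feat, masks).shape)    # torch.Size([6, 256])
```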

Non-contrast computed tomography radiomics model to predict benign and malignant thyroid nodules with lobe segmentation: A dual-center study.

Wang H, Wang X, Du YS, Wang Y, Bai ZJ, Wu D, Tang WL, Zeng HL, Tao J, He J

PubMed · Jun 28, 2025
Accurate preoperative differentiation of benign and malignant thyroid nodules is critical for optimal patient management. However, conventional imaging modalities present inherent diagnostic limitations. To develop a non-contrast computed tomography-based machine learning model integrating radiomics and clinical features for preoperative thyroid nodule classification. This multicenter retrospective study enrolled 272 patients with thyroid nodules (376 thyroid lobes) from center A (May 2021-April 2024), using histopathological findings as the reference standard. The dataset was stratified into a training cohort (264 lobes) and an internal validation cohort (112 lobes). Additional prospective temporal (97 lobes, May-August 2024, center A) and external multicenter (81 lobes, center B) test cohorts were incorporated to enhance generalizability. Thyroid lobes were segmented along the isthmus midline, with segmentation reliability confirmed by an intraclass correlation coefficient ≥ 0.80. Radiomics features were selected using Pearson correlation analysis followed by least absolute shrinkage and selection operator (LASSO) regression with 10-fold cross-validation. Seven machine learning algorithms were systematically evaluated, with model performance quantified through the area under the receiver operating characteristic curve (AUC), Brier score, decision curve analysis, and the DeLong test for comparison with radiologists' interpretations. Model interpretability was elucidated using SHapley Additive exPlanations (SHAP). The extreme gradient boosting model demonstrated robust diagnostic performance across all datasets, achieving AUCs of 0.899 [95% confidence interval (CI): 0.845-0.932] in the training cohort, 0.803 (95%CI: 0.715-0.890) in internal validation, 0.855 (95%CI: 0.775-0.935) in temporal testing, and 0.802 (95%CI: 0.664-0.939) in external testing. These results were significantly superior to radiologists' assessments (AUCs: 0.596, 0.529, 0.558, and 0.538, respectively; P < 0.001 by DeLong test). SHAP analysis identified radiomic score, age, tumor size stratification, calcification status, and cystic components as key predictive features. The model exhibited excellent calibration (Brier scores: 0.125-0.144) and provided significant clinical net benefit at decision thresholds exceeding 20%, as evidenced by decision curve analysis. The non-contrast computed tomography-based radiomics-clinical fusion model enables robust preoperative thyroid nodule classification, with SHAP-driven interpretability enhancing its clinical applicability for personalized decision-making.
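The selection-and-modeling pipeline described above (correlation filtering, LASSO with 10-fold cross-validation, then a boosted-tree classifier evaluated by AUC) could look roughly like the sketch below; it uses scikit-learn with synthetic data, a GradientBoostingClassifier as a stand-in for extreme gradient boosting, and an illustrative correlation threshold, none of which are the authors' exact settings.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier  # stand-in for XGBoost here
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

def select_radiomics_features(X: pd.DataFrame, y: pd.Series, corr_thresh: float = 0.9) -> pd.DataFrame:
    """Drop highly inter-correlated features, then keep LASSO-selected ones (10-fold CV)."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > corr_thresh).any()]
    X_filt = X.drop(columns=to_drop)
    Xs = StandardScaler().fit_transform(X_filt)
    lasso = LassoCV(cv=10, random_state=0).fit(Xs, y)
    return X_filt[X_filt.columns[lasso.coef_ != 0]]

# Synthetic stand-in for the 264-lobe training cohort with 100 radiomics features
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(264, 100)), columns=[f"feat_{i}" for i in range(100)])
y = pd.Series(((X["feat_0"] - X["feat_1"] + rng.normal(scale=0.5, size=264)) > 0).astype(int))

X_sel = select_radiomics_features(X, y)
model = GradientBoostingClassifier(random_state=0).fit(X_sel, y)
auc = roc_auc_score(y, model.predict_proba(X_sel)[:, 1])
print(f"selected features: {X_sel.shape[1]}, training AUC: {auc:.3f}")
```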

Comprehensive review of pulmonary embolism imaging: past, present and future innovations in computed tomography (CT) and other diagnostic techniques.

Triggiani S, Pellegrino G, Mortellaro S, Bubba A, Lanza C, Carriero S, Biondetti P, Angileri SA, Fusco R, Granata V, Carrafiello G

PubMed · Jun 28, 2025
Pulmonary embolism (PE) remains a critical condition that demands rapid and accurate diagnosis, for which computed tomographic pulmonary angiography (CTPA) is widely recognized as the diagnostic gold standard. However, recent advancements in imaging technologies-such as dual-energy computed tomography (DECT), photon-counting CT (PCD-CT), and artificial intelligence (AI)-offer promising enhancements to traditional diagnostic methods. This study reviews past, current and emerging technologies, focusing on their potential to optimize diagnostic accuracy, reduce contrast volumes and radiation doses, and streamline clinical workflows. DECT, with its dual-energy imaging capabilities, enhances image clarity even with lower contrast media volumes, thus reducing patient risk. Meanwhile, PCD-CT has shown potential for dose reduction and superior image resolution, particularly in challenging cases. AI-based tools further augment diagnostic speed and precision by assisting radiologists in image analysis, consequently decreasing workloads and expediting clinical decision-making. Collectively, these innovations hold promise for improved clinical management of PE, enabling not only more accurate diagnoses but also safer, more efficient patient care. Further research is necessary to fully integrate these advancements into routine clinical practice, potentially redefining diagnostic workflows for PE and enhancing patient outcomes.

Comparative analysis of iterative vs AI-based reconstruction algorithms in CT imaging for total body assessment: Objective and subjective clinical analysis.

Tucciariello RM, Botte M, Calice G, Cammarota A, Cammarota F, Capasso M, Nardo GD, Lancellotti MI, Palmese VP, Sarno A, Villonio A, Bianculli A

PubMed · Jun 28, 2025
This study evaluates the performance of iterative and AI-based reconstruction algorithms in CT imaging for brain, chest, and upper abdomen assessments. Using a 320-slice CT scanner, phantom images were analysed through quantitative metrics such as noise, contrast-to-noise ratio, and target transfer function. Additionally, five radiologists performed subjective evaluations on real patient images by scoring clinical parameters related to anatomical structures across the three body sites. The study aimed to relate the results of the standard medical-physics approach, obtained with a Catphan physical phantom, to the scores the radiologists assigned to the clinical parameters chosen in this study, and to determine whether the physical approach alone is sufficient to support the implementation of new procedures and their optimization in clinical practice. AI-based algorithms demonstrated superior performance in chest and abdominal imaging, enhancing parenchymal and vascular detail with notable reductions in noise. However, their performance in brain imaging was less effective, as the aggressive noise reduction led to excessive smoothing, which affected diagnostic interpretability. Iterative reconstruction methods provided balanced results for brain imaging, preserving structural details and maintaining diagnostic clarity. The findings emphasize the need for region-specific optimization of reconstruction protocols. While AI-based methods can complement traditional IR techniques, they should not be assumed to inherently improve outcomes. A critical and cautious introduction of AI-based techniques is essential, ensuring radiologists adapt effectively without compromising diagnostic accuracy.
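For the subjective arm, paired reader scores for the same patients under two reconstructions might be compared as in the sketch below; the Likert data and the choice of a Wilcoxon signed-rank test are illustrative assumptions, not the authors' statistical protocol.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical Likert scores (1-5) for the same 40 cases scored under two reconstructions
rng = np.random.default_rng(1)
scores_ir = rng.integers(2, 5, size=40)                              # iterative reconstruction
scores_ai = np.clip(scores_ir + rng.integers(0, 2, size=40), 1, 5)   # AI-based reconstruction

stat, p = wilcoxon(scores_ai, scores_ir)  # paired, non-parametric comparison
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```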

Prognostic value of body composition out of PSMA-PET/CT in prostate cancer patients undergoing PSMA-therapy.

Roll W, Plagwitz L, Ventura D, Masthoff M, Backhaus C, Varghese J, Rahbar K, Schindler P

PubMed · Jun 28, 2025
This retrospective study aims to develop a deep learning-based approach to whole-body CT segmentation from standard PSMA PET/CT to assess body composition in metastatic castration-resistant prostate cancer (mCRPC) patients prior to [177Lu]Lu-PSMA radioligand therapy (RLT). Our goal is to go beyond standard PSMA-PET-based pretherapeutic assessment and identify additional body composition metrics from the CT component with potential prognostic value. We used a deep learning segmentation model to perform fully automated segmentation of different tissue compartments, including visceral (VAT), subcutaneous (SAT), and intra-/intermuscular adipose tissue (IMAT), from [68Ga]Ga-PSMA PET/CT scans of n = 86 prostate cancer patients before RLT. The proportions of the different adipose tissue compartments relative to total adipose tissue (TAT), assessed either on a 3D CT volume of the abdomen or on a 2D single-slice basis (centered at the third lumbar vertebra, L3), were compared for their prognostic value. First, univariate and multivariate Cox proportional hazards regression analyses were performed. Subsequently, the subjects were dichotomized at the median tissue composition, and these subgroups were evaluated by Kaplan-Meier analysis with the log-rank test. The automated segmentation model was useful for delineating different adipose tissue compartments and skeletal muscle across different patient anatomies. Analyses revealed significant associations between lower SAT and higher IMAT ratios and poorer therapeutic outcomes in Cox regression analysis (SAT/TAT: p = 0.038; IMAT/TAT: p < 0.001) in the 3D model. In the single-slice approach, only IMAT/SAT was significantly associated with survival in Cox regression analysis (p < 0.001; SAT/TAT: p > 0.05). The IMAT ratio remained an independent predictor of survival in multivariate analysis when including PSMA-PET and blood-based prognostic factors. In this proof-of-principle study, the implementation of a deep learning-based whole-body analysis provides a robust and detailed CT-based assessment of body composition in mCRPC patients undergoing RLT. Potential prognostic parameters have to be corroborated in larger prospective datasets.
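The survival analyses described above (Cox proportional hazards regression, dichotomization at the median, and Kaplan-Meier analysis with a log-rank test) can be sketched with the lifelines library as below; the cohort, column names, and ratio values are hypothetical.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical cohort of 86 patients: overall survival, event flag, and two adipose ratios
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "os_months": rng.exponential(18, 86),
    "event": rng.integers(0, 2, 86),
    "imat_tat": rng.uniform(0.05, 0.40, 86),
    "sat_tat": rng.uniform(0.30, 0.70, 86),
})

# Multivariate Cox proportional hazards regression on the body-composition ratios
cph = CoxPHFitter().fit(df, duration_col="os_months", event_col="event")
print(cph.summary[["coef", "p"]])

# Dichotomize at the median IMAT/TAT ratio and compare the groups with a log-rank test
high = df["imat_tat"] > df["imat_tat"].median()
lr = logrank_test(df.loc[high, "os_months"], df.loc[~high, "os_months"],
                  event_observed_A=df.loc[high, "event"],
                  event_observed_B=df.loc[~high, "event"])
print("log-rank p =", round(lr.p_value, 3))

# Kaplan-Meier fit for one subgroup (plotting omitted)
km = KaplanMeierFitter().fit(df.loc[high, "os_months"], df.loc[high, "event"], label="IMAT/TAT > median")
```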

HGTL: A hypergraph transfer learning framework for survival prediction of ccRCC.

Han X, Li W, Zhang Y, Li P, Zhu J, Zhang T, Wang R, Gao Y

PubMed · Jun 27, 2025
The clinical diagnosis of clear cell renal cell carcinoma (ccRCC) primarily depends on histopathological analysis and computed tomography (CT). Although pathological diagnosis is regarded as the gold standard, invasive procedures such as biopsy carry the risk of tumor dissemination. Conversely, CT scanning offers a non-invasive alternative, but its resolution may be inadequate for detecting microscopic tumor features, which limits the performance of prognostic assessments. To address this issue, we propose a high-order correlation-driven method for predicting the survival of ccRCC using only CT images, achieving performance comparable to that of the pathological gold standard. The proposed method utilizes a cross-modal hypergraph neural network based on hypergraph transfer learning to perform high-order correlation modeling and semantic feature extraction from whole-slide pathological images and CT images. By employing multi-kernel maximum mean discrepancy, we transfer the high-order semantic features learned from pathological images to the CT-based hypergraph neural network channel. During the testing phase, high-precision survival predictions were achieved using only CT images, eliminating the need for pathological images. This approach not only reduces the risks associated with invasive examinations for patients but also significantly enhances clinical diagnostic efficiency. The proposed method was validated using four datasets: three collected from different hospitals and one from the public TCGA dataset. Experimental results indicate that the proposed method achieves higher concordance indices across all datasets compared to other methods.
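The multi-kernel maximum mean discrepancy used to transfer high-order semantics from the pathology channel to the CT channel can be sketched as a Gaussian-kernel-bank MMD loss between two batches of embeddings; the bandwidth heuristic, feature sizes, and loss usage below are assumptions, not the paper's implementation.

```python
import torch

def multi_kernel_mmd(x: torch.Tensor, y: torch.Tensor, num_kernels: int = 5, mul: float = 2.0) -> torch.Tensor:
    """MK-MMD between two feature batches using a bank of Gaussian kernels."""
    n = x.shape[0]
    z = torch.cat([x, y], dim=0)       # (2n, d)
    d2 = torch.cdist(z, z).pow(2)      # pairwise squared distances
    base = d2.detach().mean()          # bandwidth from mean squared distance (common heuristic)
    bandwidths = [base * mul ** (i - num_kernels // 2) for i in range(num_kernels)]
    k = sum(torch.exp(-d2 / (b + 1e-8)) for b in bandwidths) / num_kernels
    kxx, kyy, kxy = k[:n, :n], k[n:, n:], k[:n, n:]
    return kxx.mean() + kyy.mean() - 2 * kxy.mean()

# Example: pull CT-channel embeddings toward pathology-channel embeddings (hypothetical sizes)
path_feats = torch.randn(32, 128)                    # from the pathology hypergraph channel
ct_feats = torch.randn(32, 128, requires_grad=True)  # from the CT hypergraph channel
loss = multi_kernel_mmd(ct_feats, path_feats)
loss.backward()
print(float(loss))
```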

Pulmonary hypertension: diagnostic aspects-what is the role of imaging?

Ali HJ, Guha A

PubMed · Jun 27, 2025
The role of imaging in the diagnosis of pulmonary hypertension is multifaceted, spanning estimation of pulmonary arterial pressures, understanding of pulmonary artery-right ventricular interaction, and identification of the cause. The purpose of this review is to provide a comprehensive overview of multimodality imaging in the evaluation of pulmonary hypertension as well as the novel applications of imaging techniques that have improved our detection and understanding of pulmonary hypertension. There are diverse imaging modalities available for comprehensive assessment of pulmonary hypertension that are expanding with new tracers (e.g., hyperpolarized xenon gas, 129Xe) and imaging techniques (C-arm cone-beam computed tomography). Artificial intelligence applications may improve the efficiency and accuracy of screening for pulmonary hypertension as well as further characterize pulmonary vasculopathies using computed tomography of the chest. In the face of increasing imaging options, a "value-based imaging" approach should be adopted to reduce unnecessary burden on the patient and the healthcare system without compromising the accuracy and completeness of diagnostic assessment. Future studies are needed to optimize the use of multimodality imaging and artificial intelligence in the comprehensive evaluation of patients with pulmonary hypertension.

Quantifying Sagittal Craniosynostosis Severity: A Machine Learning Approach With CranioRate.

Tao W, Somorin TJ, Kueper J, Dixon A, Kass N, Khan N, Iyer K, Wagoner J, Rogers A, Whitaker R, Elhabian S, Goldstein JA

PubMed · Jun 27, 2025
Objective: To develop and validate machine learning (ML) models for objective and comprehensive quantification of sagittal craniosynostosis (SCS) severity, enhancing clinical assessment, management, and research. Design: A cross-sectional study that combined the analysis of computed tomography (CT) scans and expert ratings. Setting: The study was conducted at a children's hospital and a major computer imaging institution. Our survey collected expert ratings from participating surgeons. Participants: The study included 195 patients with nonsyndromic SCS, 221 patients with nonsyndromic metopic craniosynostosis (CS), and 178 age-matched controls. Fifty-four craniofacial surgeons participated in rating 20 patients' head CT scans. Interventions: Computed tomography scans for cranial morphology assessment and a radiographic diagnosis of nonsyndromic SCS. Main Outcomes: Accuracy of the proposed Sagittal Severity Score (SSS) in predicting expert ratings compared to the cephalic index (CI). Secondary outcomes compared Likert ratings with SCS status, the predictive power of skull-based versus skin-based landmarks, and assessment of an unsupervised ML model, the Cranial Morphology Deviation (CMD), as an alternative without ratings. Results: The SSS achieved significantly higher accuracy in predicting expert responses than CI (P < .05). Likert ratings outperformed SCS status in supervising ML models to quantify within-group variations. Skin-based landmarks demonstrated predictive power equivalent to skull landmarks (P < .05, threshold 0.02). The CMD demonstrated a strong correlation with the SSS (Pearson coefficient: 0.92, Spearman coefficient: 0.90, P < .01). Conclusions: The SSS and CMD can provide accurate, consistent, and comprehensive quantification of SCS severity. Implementing these data-driven ML models can significantly advance CS care through standardized assessments, enhanced precision, and informed surgical planning.
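For context on the baseline metric and the reported correlations, the sketch below computes the cephalic index (maximum head width over maximum head length, expressed as a percentage) and a Pearson/Spearman correlation between two severity scores; the paired scores are synthetic and only illustrate the comparison, not the study's data.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def cephalic_index(head_width_mm: float, head_length_mm: float) -> float:
    """Cephalic index: maximum head width over maximum head length, as a percentage."""
    return 100.0 * head_width_mm / head_length_mm

print(round(cephalic_index(130.0, 185.0), 1))  # elongated (scaphocephalic) heads trend toward low CI

# Synthetic paired severity scores illustrating the Pearson/Spearman comparison
rng = np.random.default_rng(0)
sss = rng.normal(size=50)
cmd = sss + rng.normal(scale=0.3, size=50)
print(f"Pearson r = {pearsonr(sss, cmd)[0]:.2f}, Spearman rho = {spearmanr(sss, cmd)[0]:.2f}")
```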