
Non-invasive CT based multiregional radiomics for predicting pathologic complete response to preoperative neoadjuvant chemoimmunotherapy in non-small cell lung cancer.

Fan S, Xie J, Zheng S, Wang J, Zhang B, Zhang Z, Wang S, Cui Y, Liu J, Zheng X, Ye Z, Cui X, Yue D

pubmed | May 19, 2025
This study aims to develop and validate a multiregional radiomics model to predict pathological complete response (pCR) to neoadjuvant chemoimmunotherapy in non-small cell lung cancer (NSCLC), and further to evaluate the model's performance in specific subgroups (N2 stage and anti-PD-1/PD-L1). In total, 216 patients with NSCLC who underwent neoadjuvant chemoimmunotherapy followed by surgical intervention were included and randomly assigned to training and validation sets. From pre-treatment baseline CT, one intratumoral (T) and two peritumoral regions (P₃: 0-3 mm; P₆: 0-6 mm) were extracted. Five radiomics models were developed using machine learning algorithms to predict pCR, utilizing selected features from the intratumoral (T), peritumoral (P₃, P₆), and combined intra- and peritumoral regions (T + P₃, T + P₆). Additionally, the predictive efficacy of the optimal model was specifically assessed for patients in the N2 stage and anti-PD-1/PD-L1 subgroups. A total of 51.4% (111/216) of patients exhibited pCR following neoadjuvant chemoimmunotherapy. Multivariable analysis identified the T + P₃ radiomics signature as the only independent predictor of pCR (P < 0.001). The multiregional radiomics model (T + P₃) exhibited superior predictive performance for pCR, achieving an area under the curve (AUC) of 0.75 in the validation cohort. Furthermore, this multiregional model maintained robust predictive accuracy in both the N2 stage and anti-PD-1/PD-L1 subgroups, with AUCs of 0.829 and 0.833, respectively. The proposed multiregional radiomics model showed potential for predicting pCR in NSCLC after neoadjuvant chemoimmunotherapy and demonstrated good predictive performance in specific subgroups. This capability may assist clinicians in identifying suitable candidates for neoadjuvant chemoimmunotherapy and advance precision therapy.
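As an illustration of the region definitions above, a peritumoral ring such as P₃ or P₆ can be derived from a binary tumor mask by morphological dilation. The sketch below is a minimal, hypothetical example (not the authors' code), assuming isotropic voxel spacing; function and variable names are illustrative.

```python
# Minimal sketch: derive 0-3 mm and 0-6 mm peritumoral rings from a binary
# tumor mask with SciPy. Assumes isotropic voxel spacing in `spacing_mm`.
import numpy as np
from scipy.ndimage import binary_dilation

def peritumoral_ring(tumor_mask: np.ndarray, margin_mm: float, spacing_mm: float) -> np.ndarray:
    """Return the shell extending roughly `margin_mm` outward from the tumor surface."""
    n_voxels = max(1, int(round(margin_mm / spacing_mm)))
    dilated = binary_dilation(tumor_mask, iterations=n_voxels)
    return dilated & ~tumor_mask  # keep only the ring, exclude the tumor itself

# Example: build the regions used by the T + P3 / T + P6 models.
mask = np.zeros((64, 64, 64), dtype=bool)
mask[28:36, 28:36, 28:36] = True          # toy "tumor"
p3 = peritumoral_ring(mask, margin_mm=3.0, spacing_mm=1.0)
p6 = peritumoral_ring(mask, margin_mm=6.0, spacing_mm=1.0)
```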

Learning Wavelet-Sparse FDK for 3D Cone-Beam CT Reconstruction

Yipeng Sun, Linda-Sophie Schneider, Chengze Ye, Mingxuan Gu, Siyuan Mei, Siming Bayer, Andreas Maier

arxiv preprint | May 19, 2025
Cone-Beam Computed Tomography (CBCT) is essential in medical imaging, and the Feldkamp-Davis-Kress (FDK) algorithm is a popular choice for reconstruction due to its efficiency. However, FDK is susceptible to noise and artifacts. While recent deep learning methods offer improved image quality, they often increase computational complexity and lack the interpretability of traditional methods. In this paper, we introduce an enhanced FDK-based neural network that maintains the classical algorithm's interpretability by selectively integrating trainable elements into the cosine weighting and filtering stages. Recognizing the challenge of a large parameter space inherent in 3D CBCT data, we leverage wavelet transformations to create sparse representations of the cosine weights and filters. This strategic sparsification reduces the parameter count by 93.75% without compromising performance, accelerates convergence, and importantly, maintains the inference computational cost equivalent to the classical FDK algorithm. Our method not only ensures volumetric consistency and boosts robustness to noise, but is also designed for straightforward integration into existing CT reconstruction pipelines. This presents a pragmatic enhancement that can benefit clinical applications, particularly in environments with computational limitations.
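One way to read the sparsification idea: because wavelet synthesis is linear, a trainable reconstruction filter can be parameterized by a small set of wavelet coefficients and reconstructed differentiably. The following is a speculative sketch of that parameterization, not the authors' implementation; the wavelet, decomposition level, and the choice to train only the coarse band are assumptions (keeping the coarse band at level 4 retains roughly 1/16 of the coefficients, in the same ballpark as the reduction quoted above).

```python
# Speculative sketch: a learnable filter stored as sparse wavelet coefficients.
# The inverse DWT is precomputed as a linear synthesis matrix so gradients
# flow through plain matrix multiplication in PyTorch.
import numpy as np
import pywt
import torch
import torch.nn as nn

def synthesis_matrix(filter_len: int, wavelet: str = "db4", level: int = 4) -> torch.Tensor:
    """Columns = inverse DWT of unit coarse-band coefficient vectors."""
    template = pywt.wavedec(np.zeros(filter_len), wavelet, level=level)
    cols = []
    for i in range(len(template[0])):
        coeffs = [np.zeros_like(c) for c in template]
        coeffs[0][i] = 1.0
        cols.append(pywt.waverec(coeffs, wavelet)[:filter_len])
    return torch.tensor(np.stack(cols, axis=1), dtype=torch.float32)

class WaveletSparseFilter(nn.Module):
    def __init__(self, filter_len: int = 512, wavelet: str = "db4", level: int = 4):
        super().__init__()
        self.register_buffer("synth", synthesis_matrix(filter_len, wavelet, level))
        ramp = np.abs(np.fft.fftfreq(filter_len))  # classical ramp-filter init (assumed)
        init = pywt.wavedec(ramp, wavelet, level=level)[0]
        self.coeffs = nn.Parameter(torch.tensor(init, dtype=torch.float32))

    def forward(self) -> torch.Tensor:
        return self.synth @ self.coeffs  # differentiable filter reconstruction
```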

Diagnosis of early idiopathic pulmonary fibrosis: current status and future perspective.

Wang X, Xia X, Hou Y, Zhang H, Han W, Sun J, Li F

pubmed | May 19, 2025
The standard approach to diagnosing idiopathic pulmonary fibrosis (IPF) includes identifying the usual interstitial pneumonia (UIP) pattern via high-resolution computed tomography (HRCT) or lung biopsy and excluding known causes of interstitial lung disease (ILD). However, the limitations of manual interpretation of lung imaging, together with factors such as limited disease awareness and non-specific symptoms, have hindered the timely diagnosis of IPF. This review proposes a definition of early IPF, emphasizes the urgency of diagnosing it, and highlights current diagnostic strategies and future prospects. The integration of artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), is revolutionizing the diagnosis of early IPF by standardizing and accelerating the interpretation of thoracic images. Innovative bronchoscopic techniques such as transbronchial lung cryobiopsy (TBLC), genomic classifiers, and endobronchial optical coherence tomography (EB-OCT) provide less invasive diagnostic alternatives. In addition, chest auscultation, serum biomarkers, and susceptibility genes are pivotal for indicating an early diagnosis. Ongoing research is essential for refining diagnostic methods and treatment strategies for early IPF.

A Skull-Adaptive Framework for AI-Based 3D Transcranial Focused Ultrasound Simulation

Vinkle Srivastav, Juliette Puel, Jonathan Vappou, Elijah Van Houten, Paolo Cabras, Nicolas Padoy

arxiv preprint | May 19, 2025
Transcranial focused ultrasound (tFUS) is an emerging modality for non-invasive brain stimulation and therapeutic intervention, offering millimeter-scale spatial precision and the ability to target deep brain structures. However, the heterogeneous and anisotropic nature of the human skull introduces significant distortions to the propagating ultrasound wavefront, which require time-consuming patient-specific planning and corrections using numerical solvers for accurate targeting. To enable data-driven approaches in this domain, we introduce TFUScapes, the first large-scale, high-resolution dataset of tFUS simulations through anatomically realistic human skulls derived from T1-weighted MRI images. We have developed a scalable simulation engine pipeline using the k-Wave pseudo-spectral solver, where each simulation returns a steady-state pressure field generated by a focused ultrasound transducer placed at realistic scalp locations. In addition to the dataset, we present DeepTFUS, a deep learning model that estimates normalized pressure fields directly from input 3D CT volumes and transducer position. The model extends a U-Net backbone with transducer-aware conditioning, incorporating Fourier-encoded position embeddings and MLP layers to create global transducer embeddings. These embeddings are fused with U-Net encoder features via feature-wise modulation, dynamic convolutions, and cross-attention mechanisms. The model is trained using a combination of spatially weighted and gradient-sensitive loss functions, enabling it to approximate high-fidelity wavefields. The TFUScapes dataset is publicly released to accelerate research at the intersection of computational acoustics, neurotechnology, and deep learning. The project page is available at https://github.com/CAMMA-public/TFUScapes.
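The transducer conditioning described above, Fourier-encoded position embeddings passed through an MLP and fused with encoder features, can be illustrated with a FiLM-style modulation. The sketch below is an assumption-laden approximation (module name, frequency schedule, and feature shapes are all illustrative), not the DeepTFUS architecture.

```python
# Sketch: Fourier-encode a 3-D transducer position, map it through an MLP,
# and modulate 3-D U-Net features via per-channel scale and shift (FiLM).
import torch
import torch.nn as nn

class FourierFiLM(nn.Module):
    def __init__(self, n_freqs: int = 8, channels: int = 64):
        super().__init__()
        self.register_buffer("freqs", 2.0 ** torch.arange(n_freqs, dtype=torch.float32))
        self.mlp = nn.Sequential(
            nn.Linear(3 * 2 * n_freqs, 128), nn.ReLU(),
            nn.Linear(128, 2 * channels),  # -> per-channel (gamma, beta)
        )

    def forward(self, feats: torch.Tensor, pos: torch.Tensor) -> torch.Tensor:
        # pos: (B, 3) normalized transducer coordinates; feats: (B, C, D, H, W)
        ang = pos[:, :, None] * self.freqs                  # (B, 3, F)
        emb = torch.cat([ang.sin(), ang.cos()], dim=-1).flatten(1)
        gamma, beta = self.mlp(emb).chunk(2, dim=-1)        # (B, C) each
        gamma = gamma[:, :, None, None, None]
        beta = beta[:, :, None, None, None]
        return gamma * feats + beta

# Usage: modulated = FourierFiLM(channels=64)(encoder_feats, transducer_xyz)
```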

Development and Validation of an Integrated Deep Learning Model to Assist Eosinophilic Chronic Rhinosinusitis Diagnosis: A Multicenter Study.

Li J, Mao N, Aodeng S, Zhang H, Zhu Z, Wang L, Liu Y, Qi H, Qiao H, Lin Y, Qiu Z, Yang T, Zha Y, Wang X, Wang W, Song X, Lv W

pubmed | May 19, 2025
The assessment of eosinophilic chronic rhinosinusitis (eCRS) lacks accurate non-invasive preoperative prediction methods, relying primarily on invasive histopathological sections. This study aims to use computed tomography (CT) images and clinical parameters to develop an integrated deep learning model for the preoperative identification of eCRS and to further explore the biological basis of its predictions. A total of 1098 patients with sinus CT images were included from two hospitals and divided into training, internal, and external test sets. The region of interest of sinus lesions was manually outlined by an experienced radiologist. We utilized three deep learning models (3D-ResNet, 3D-Xception, and HR-Net) to extract features from CT images and calculate deep learning scores. The clinical signature and deep learning score were fed into a support vector machine for classification. The receiver operating characteristic curve, sensitivity, specificity, and accuracy were used to evaluate the integrated deep learning model. Additionally, proteomic analysis was performed on 34 patients to explore the biological basis of the model's predictions. The area under the curve of the integrated deep learning model for predicting eCRS was 0.851 (95% confidence interval [CI]: 0.77-0.93) in the internal test set and 0.821 (95% CI: 0.78-0.86) in the external test set. Proteomic analysis revealed that in patients predicted to have eCRS, 594 genes were dysregulated, some of which were associated with pathways and biological processes such as the chemokine signaling pathway. The proposed integrated deep learning model could effectively identify eCRS patients. This study provides a non-invasive way of identifying eCRS to facilitate personalized therapy, paving the way toward precision medicine for CRS.
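The fusion step, concatenating a deep learning score with clinical parameters and classifying with a support vector machine, is conceptually simple; below is a minimal, hypothetical sketch with placeholder data and feature names (the study's actual clinical signature is not specified here).

```python
# Minimal sketch, not the study's code: fuse a DL score with clinical
# parameters and classify with an SVM, as the abstract describes.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dl_score = rng.random((200, 1))              # per-patient CNN output (placeholder)
clinical = rng.random((200, 3))              # e.g. eosinophil count, IgE, age (assumed)
X = np.hstack([dl_score, clinical])
y = rng.integers(0, 2, 200)                  # eCRS vs non-eCRS labels (toy)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, y)
print(clf.predict_proba(X[:5])[:, 1])        # predicted eCRS probabilities
```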

Non-orthogonal kV imaging guided patient position verification in non-coplanar radiation therapy with dataset-free implicit neural representation.

Ye S, Chen Y, Wang S, Xing L, Gao Y

pubmed | May 19, 2025
Cone-beam CT (CBCT) is crucial for patient alignment and target verification in radiation therapy (RT). However, for non-coplanar beams, potential collisions between the treatment couch and the on-board imaging system limit the range through which the gantry can be rotated. Limited-angle measurements are often insufficient to generate high-quality volumetric images for image-domain registration, limiting the use of CBCT for position verification. An alternative to image-domain registration is to use a few 2D projections acquired by the on-board kV imager to register with the 3D planning CT for patient position verification, referred to as 2D-3D registration. The 2D-3D registration involves converting the 3D volume into a set of digitally reconstructed radiographs (DRRs) expected to be comparable to the acquired 2D projections. A domain gap between the generated DRRs and the acquired projections can arise from inaccurate geometry modeling in DRR generation and from artifacts in the actual acquisitions. We aim to improve the efficiency and accuracy of the challenging 2D-3D registration problem in non-coplanar RT with limited-angle CBCT scans. We designed an accelerated, dataset-free, and patient-specific 2D-3D registration framework based on an implicit neural representation (INR) network and a composite similarity measure. The INR network consists of a lightweight three-layer multilayer perceptron followed by average pooling to calculate rigid motion parameters, which are used to transform the original 3D volume to the moving position. The Radon transform and imaging specifications at the moving position are used to generate DRRs with higher accuracy. We designed a composite similarity measure consisting of pixel-wise intensity differences and gradient differences between the generated DRRs and acquired projections to further reduce the impact of their domain gap on registration accuracy. We evaluated the proposed method on both simulation data and real phantom data acquired from a Varian TrueBeam machine. Comparisons with a conventional non-deep-learning registration approach and ablation studies on the composite similarity measure were conducted to demonstrate the efficacy of the proposed method. In the simulation experiments, two X-ray projections of a head-and-neck image with a 45° angular discrepancy were used for the registration. Accuracy was evaluated at four different moving positions with ground-truth moving parameters. The proposed method achieved sub-millimeter accuracy in translations and sub-degree accuracy in rotations. In the phantom experiments, a head-and-neck phantom was scanned at three different positions involving couch translations and rotations. We achieved translation errors of < 2 mm and sub-degree accuracy for pitch and roll. Experiments on registration using different numbers of projections with varying angular discrepancies demonstrated the improved accuracy and robustness of the proposed method compared to both the conventional registration approach and the proposed approach without certain components of the composite similarity measure.
We proposed a dataset-free lightweight INR-based registration with a composite similarity measure for the challenging 2D-3D registration problem with limited-angle CBCT scans. Comprehensive evaluations of both simulation data and experimental phantom data demonstrated the efficiency, accuracy, and robustness of the proposed method.
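As a worked illustration of the composite similarity measure described above (pixel-wise intensity differences plus gradient differences between a generated DRR and an acquired projection), here is a minimal sketch. The equal default weighting and finite-difference gradients are assumptions; the paper's exact formulation may differ.

```python
# Sketch: composite similarity between a DRR and an acquired projection,
# both passed as 2-D tensors. Lower values indicate better alignment.
import torch

def composite_similarity(drr: torch.Tensor, proj: torch.Tensor, w_grad: float = 1.0) -> torch.Tensor:
    intensity = torch.mean((drr - proj) ** 2)
    # Finite-difference gradients along each image axis.
    gx = torch.mean((torch.diff(drr, dim=0) - torch.diff(proj, dim=0)) ** 2)
    gy = torch.mean((torch.diff(drr, dim=1) - torch.diff(proj, dim=1)) ** 2)
    return intensity + w_grad * (gx + gy)
```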

Thymoma habitat segmentation and risk prediction model using CT imaging and K-means clustering.

Liang Z, Li J, He S, Li S, Cai R, Chen C, Zhang Y, Deng B, Wu Y

pubmed | May 19, 2025
Thymomas, though rare, present a wide range of clinical behaviors, from indolent to aggressive forms, making accurate risk stratification crucial for treatment planning. Traditional methods such as histopathology and radiological assessment often fail to capture tumor heterogeneity, which can impact prognosis. Radiomics, combined with machine learning, provides a way to extract and analyze quantitative imaging features, offering the potential to improve tumor classification and risk prediction. By segmenting tumors into distinct habitat zones, intratumoral heterogeneity can be assessed more effectively. This study employs radiomics and machine learning techniques to enhance thymoma risk prediction, aiming to improve diagnostic consistency and reduce variability in radiologists' assessments. It aims to identify distinct habitat zones within thymomas through CT imaging feature analysis, to establish a predictive model differentiating high- and low-risk thymomas, and to explore how this model can assist radiologists. We obtained CT imaging data from 133 patients with thymoma treated at the Affiliated Hospital of Guangdong Medical University from 2015 to 2023. Images from the plain scan, venous, and arterial phases, along with their differential (subtracted) images, were used. Tumor regions were segmented into three habitat zones using K-means clustering. Imaging features from each habitat zone were extracted using the PyRadiomics library (van Griethuysen, 2017). The 28 most discriminative features were selected through Mann-Whitney U tests (Mann, 1947) and Spearman's correlation analysis (Spearman, 1904). Five predictive models were built using the same machine learning algorithm (Support Vector Machine [SVM]): Habitat1, Habitat2, and Habitat3 (trained on features from individual tumor habitat regions), Habitat All (trained on combined features from all regions), and Intra (trained on intratumoral features), and their performances were compared. The models' diagnostic outcomes were compared with the diagnoses of four radiologists (two junior and two experienced physicians). The AUC (area under the curve) was 0.818 for habitat zone 1, 0.732 for habitat zone 2, and 0.763 for habitat zone 3. The comprehensive model, which combined data from all habitat zones, achieved an AUC of 0.960, outperforming the model based on traditional radiomic features (AUC of 0.720). The model significantly improved the diagnostic accuracy of all four radiologists: the AUCs for junior radiologists 1 and 2 increased from 0.747 and 0.775 to 0.932 and 0.972, respectively, while those for experienced radiologists 1 and 2 increased from 0.932 and 0.859 to 0.977 and 0.972, respectively. This study successfully identified distinct habitat zones within thymomas through CT imaging feature analysis and developed an efficient predictive model that significantly improved diagnostic accuracy. The model offers a novel tool for thymoma risk assessment and can aid clinical decision-making.
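To make the habitat-segmentation step concrete: per-voxel intensities from the co-registered phases can be clustered into three zones with K-means, and features then extracted per zone. The sketch below is illustrative only; the exact features the authors clustered are not specified here, and all names are assumptions.

```python
# Illustrative sketch: partition a tumor into habitat zones by clustering
# per-voxel multi-phase intensities inside the tumor mask with K-means.
import numpy as np
from sklearn.cluster import KMeans

def habitat_zones(phases: list[np.ndarray], mask: np.ndarray, k: int = 3) -> np.ndarray:
    """phases: co-registered 3-D volumes (plain / arterial / venous / subtracted)."""
    voxels = np.stack([p[mask] for p in phases], axis=1)   # (n_voxels, n_phases)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxels)
    out = np.zeros(mask.shape, dtype=np.int8)
    out[mask] = labels + 1                                 # habitats 1..k, 0 = background
    return out
```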

Artificial intelligence based pulmonary vessel segmentation: an opportunity for automated three-dimensional planning of lung segmentectomy.

Mank QJ, Thabit A, Maat APWM, Siregar S, Van Walsum T, Kluin J, Sadeghi AH

pubmed | May 19, 2025
This study aimed to develop an automated method for pulmonary artery and vein segmentation in both lungs from computed tomography (CT) images using artificial intelligence (AI). The segmentations were evaluated using PulmoSR software, which provides 3D visualizations of patient-specific anatomy, potentially enhancing a surgeon's understanding of lung structure. A dataset of 125 CT scans from lung segmentectomy patients at Erasmus MC was used. Manual annotations for pulmonary arteries and veins were created with 3D Slicer. nnU-Net models were trained for both lungs and assessed using Dice score, sensitivity, and specificity. Intraoperative recordings demonstrated clinical applicability. A paired t-test evaluated the statistical significance of the differences between automatic and manual segmentations. The nnU-Net model, trained at full 3D resolution, achieved a mean Dice score between 0.91 and 0.92. The mean sensitivity and specificity were: left artery, 0.86 and 0.99; right artery, 0.84 and 0.99; left vein, 0.85 and 0.99; right vein, 0.85 and 0.99. The automatic method reduced segmentation time from ∼1.5 hours to under 5 minutes. Five cases were evaluated to demonstrate how the segmentations support lung segmentectomy procedures. P-values for the Dice scores were all below 0.01, indicating statistical significance. The nnU-Net models successfully performed automatic segmentation of pulmonary arteries and veins in both lungs. When integrated with visualization tools, these automatic segmentations can enhance preoperative and intraoperative planning by providing detailed 3D views of the patient's anatomy.
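The overlap metrics reported above are straightforward to compute from binary masks; the snippet below is a standalone sketch (not the PulmoSR or nnU-Net evaluation code) for Dice, sensitivity, and specificity.

```python
# Sketch: Dice, sensitivity, and specificity for a predicted vs. manual
# binary segmentation, computed from voxel-wise confusion counts.
import numpy as np

def seg_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    return {
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }
```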

CTLformer: A Hybrid Denoising Model Combining Convolutional Layers and Self-Attention for Enhanced CT Image Reconstruction

Zhiting Zheng, Shuqi Wu, Wen Ding

arxiv preprint | May 18, 2025
Low-dose CT (LDCT) images are often accompanied by significant noise, which negatively impacts image quality and subsequent diagnostic accuracy. To address the challenges of multi-scale feature fusion and diverse noise distribution patterns in LDCT denoising, this paper introduces an innovative model, CTLformer, which combines convolutional structures with transformer architecture. Two key innovations are proposed: a multi-scale attention mechanism and a dynamic attention control mechanism. The multi-scale attention mechanism, implemented through the Token2Token mechanism and self-attention interaction modules, effectively captures both fine details and global structures at different scales, enhancing relevant features and suppressing noise. The dynamic attention control mechanism adapts the attention distribution based on the noise characteristics of the input image, focusing on high-noise regions while preserving details in low-noise areas, thereby enhancing robustness and improving denoising performance. Furthermore, CTLformer integrates convolutional layers for efficient feature extraction and uses overlapping inference to mitigate boundary artifacts, further strengthening its denoising capability. Experimental results on the 2016 National Institutes of Health AAPM Mayo Clinic LDCT Challenge dataset demonstrate that CTLformer significantly outperforms existing methods in both denoising performance and model efficiency, greatly improving the quality of LDCT images. The proposed CTLformer not only provides an efficient solution for LDCT denoising but also shows broad potential in medical image analysis, especially for clinical applications dealing with complex noise patterns.
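Among the components above, overlapping inference is the easiest to illustrate: denoise overlapping tiles and average them where they overlap to suppress boundary artifacts. The sketch below is a generic, assumed version (tile and stride values, 2-D single-channel input, and the averaging scheme are not taken from the paper).

```python
# Sketch: tile-and-average inference for a 2-D image. `model` is any
# callable mapping a tile to a denoised tile of the same shape.
import torch

def _starts(size: int, tile: int, stride: int) -> list[int]:
    starts = list(range(0, size - tile + 1, stride))
    if starts[-1] != size - tile:
        starts.append(size - tile)   # ensure the final edge tile is covered
    return starts

def overlapped_inference(model, image: torch.Tensor, tile: int = 64, stride: int = 48) -> torch.Tensor:
    """Denoise overlapping tiles and average overlaps; assumes image dims >= tile."""
    H, W = image.shape
    out = torch.zeros_like(image)
    weight = torch.zeros_like(image)
    for top in _starts(H, tile, stride):
        for left in _starts(W, tile, stride):
            out[top:top + tile, left:left + tile] += model(image[top:top + tile, left:left + tile])
            weight[top:top + tile, left:left + tile] += 1.0
    return out / weight
```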

Deep learning feature-based model for predicting lymphovascular invasion in urothelial carcinoma of bladder using CT images.

Xiao B, Lv Y, Peng C, Wei Z, Xv Q, Lv F, Jiang Q, Liu H, Li F, Xv Y, He Q, Xiao M

pubmed | May 18, 2025
Lymphovascular invasion significantly impacts the prognosis of urothelial carcinoma of the bladder. Traditional lymphovascular invasion detection methods are time-consuming and costly. This study aims to develop a deep learning-based model to preoperatively predict lymphovascular invasion status in urothelial carcinoma of the bladder using CT images. Data and CT images of 577 patients across four medical centers were retrospectively collected. The largest tumor slices from the transverse, coronal, and sagittal planes were selected and used to train CNN models (InceptionV3, DenseNet121, ResNet18, ResNet34, ResNet50, and VGG11). Deep learning features were extracted and visualized using Grad-CAM. Principal component analysis reduced the features to 64. Using the extracted features, Decision Tree, XGBoost, and LightGBM models were trained with 5-fold cross-validation and ensembled in a stacking model. Clinical risk factors were identified through logistic regression analyses and combined with deep learning scores to enhance prediction accuracy. The ResNet50-based model achieved an AUC of 0.818 in the validation set and 0.708 in the testing set. The combined model showed an AUC of 0.794 in the validation set and 0.767 in the testing set, demonstrating robust performance across diverse data. We developed a robust radiomics model based on deep learning features from CT images to preoperatively predict lymphovascular invasion status in urothelial carcinoma of the bladder, offering a non-invasive, cost-effective tool to assist clinicians in personalized treatment planning. In summary, we developed a deep learning feature-based stacking model to predict lymphovascular invasion in patients with urothelial carcinoma of the bladder using CT; the maximum cross-sections from the three planes of the CT image were used to train the CNN models, and comparisons were made across six CNN networks, including ResNet50.
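The stacking step lends itself to a short sketch: base learners cross-validated and combined by a meta-classifier. Below is a hedged, generic version; scikit-learn's GradientBoostingClassifier stands in for XGBoost/LightGBM to keep the example dependency-free, and all data are placeholders.

```python
# Sketch: stacking ensemble over PCA-reduced deep features with 5-fold CV
# for out-of-fold base predictions, as the abstract describes in outline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 64))                    # 64 PCA-reduced deep features (toy)
y = rng.integers(0, 2, 300)                  # LVI-positive vs LVI-negative (toy)

stack = StackingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=4)),
        ("gbm", GradientBoostingClassifier()),
    ],
    final_estimator=LogisticRegression(),
    cv=5,                                    # 5-fold CV for base-learner predictions
)
stack.fit(X, y)
print(stack.predict_proba(X[:3])[:, 1])
```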
