A Physics-ASIC Architecture-Driven Deep Learning Photon-Counting Detector Model Under Limited Data.

Yu X, Wu Q, Qin W, Zhong T, Su M, Ma J, Zhang Y, Ji X, Wang W, Quan G, Du Y, Chen Y, Lai X

PubMed · Sep 4, 2025
Photon-counting computed tomography (PCCT) based on photon-counting detectors (PCDs) represents a cutting-edge CT technology, offering higher spatial resolution, reduced radiation dose, and advanced material decomposition capabilities. Accurately modeling complex, nonlinear PCDs with limited calibration data remains one of the challenges hindering the widespread adoption of PCCT. This paper introduces a physics-ASIC architecture-driven deep learning detector model for PCDs. The model captures the comprehensive response of the PCD, encompassing both the sensor and ASIC responses. We present experimental results demonstrating the model's exceptional accuracy and robustness with limited calibration data. Key advancements include reduced calibration errors, reasonable physics-ASIC parameter estimation, and high-quality, high-accuracy material decomposition images.
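
As a rough illustration of what a physics-driven PCD model has to capture, the sketch below convolves an incident spectrum with a Gaussian sensor energy response and counts events above ASIC thresholds. Every number in it (spectrum shape, energy resolution, thresholds) is an illustrative assumption rather than a parameter from the paper, and it omits effects such as charge sharing and pulse pileup that the learned model would also need to absorb.

```python
import numpy as np

# Toy forward model of a single photon-counting detector pixel: an incident spectrum
# is blurred by a Gaussian sensor energy response, then counted above ASIC thresholds.
energies = np.arange(20, 121)                           # keV grid
incident = np.exp(-0.5 * ((energies - 70) / 15) ** 2)   # made-up incident spectrum (arb. units)

sigma = 5.0                                             # assumed sensor energy resolution (keV)
response = np.exp(-0.5 * ((energies[:, None] - energies[None, :]) / sigma) ** 2)
response /= response.sum(axis=0, keepdims=True)         # column = deposited-energy pdf per incident energy
recorded = response @ incident                          # spectrum presented to the ASIC comparators

thresholds = [25, 50, 75, 100]                          # assumed ASIC counter thresholds (keV)
counts = [recorded[energies >= t].sum() for t in thresholds]
print(dict(zip(thresholds, np.round(counts, 2))))
```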

Deep Learning for Segmenting Ischemic Stroke Infarction in Non-contrast CT Scans by Utilizing Asymmetry.

Sun J, Ju GL, Qu YH, Xie HH, Sun HX, Han SY, Li YF, Jia XQ, Yang Q

PubMed · Sep 4, 2025
Non-contrast computed tomography (NCCT) is a first-line imaging technique for determining treatment options in acute ischemic stroke (AIS). However, its poor contrast and signal-to-noise ratio limit diagnostic accuracy for radiologists, and automated AIS lesion segmentation on NCCT remains a challenge. This study aims to develop a segmentation method for ischemic lesions in NCCT scans that combines symmetry-based principles with the nnUNet segmentation model. Our approach integrates a Generative Module (GM) based on a 2.5D ResUNet and an Upstream Segmentation Module (UM) with additional inputs and constraints under the 3D nnUNet segmentation model, using symmetry-based learning to enhance the identification and segmentation of ischemic regions. We used the publicly accessible AISD dataset, which contains 397 NCCT scans of acute ischemic stroke acquired within 24 h of symptom onset. Our method was trained and validated on 345 scans, while the remaining 52 scans were used for internal testing. Additionally, we included 60 positive cases (External Set 1) with segmentation labels obtained from our hospital for external validation of the segmentation task. External Set 2 was employed to evaluate the model's sensitivity and specificity in case-level classification, further assessing its clinical performance. We introduced features such as an intensity-based lesion probability (ILP) function and dedicated input channels for suspected lesion areas to augment the model's sensitivity and specificity. The method achieved a Dice Similarity Coefficient (DSC) of 0.6720 and a 95th-percentile Hausdorff Distance (HD95) of 35.28 on the internal test dataset. On the external test dataset, it yielded a DSC of 0.4891 and an HD95 of 46.06. These metrics reflect substantial overlap with expert-drawn boundaries and demonstrate the model's potential for reliable clinical application. In terms of classification performance, the method achieved an Area Under the Curve (AUC) of 0.991 on the external test set, surpassing nnUNet, which recorded an AUC of 0.947. This study introduces a novel segmentation technique for ischemic lesions in NCCT scans, leveraging symmetry-based principles integrated with nnUNet, and shows potential for improving clinical decision-making in stroke care.
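
The core idea of symmetry-based learning, comparing each hemisphere against its mirrored counterpart to highlight subtle hypodensity, can be illustrated with a minimal sketch. This is not the authors' implementation; the mirroring axis, alignment assumption, and smoothing parameter are all assumptions.

```python
# Minimal sketch of a hemispheric asymmetry map for NCCT, assuming the brain has
# already been rigidly aligned so the mid-sagittal plane sits at the centre of `axis`.
import numpy as np
from scipy.ndimage import gaussian_filter

def asymmetry_map(volume: np.ndarray, axis: int = 2, sigma: float = 2.0) -> np.ndarray:
    """Return a voxelwise left-right intensity difference map.

    volume : 3D NCCT array (z, y, x). Hypodense ischemic tissue shows up as
    negative values relative to the contralateral side.
    """
    mirrored = np.flip(volume, axis=axis)           # reflect across the assumed mid-sagittal plane
    diff = volume.astype(np.float32) - mirrored     # signed asymmetry
    return gaussian_filter(diff, sigma=sigma)       # smooth to suppress NCCT noise

# Example: feed the smoothed asymmetry map as an extra input channel to a segmentation model.
# ct = load_nifti("case_001.nii.gz")               # hypothetical loader
# extra_channel = asymmetry_map(ct)
```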

Lung lobe segmentation: performance of open-source MOOSE, TotalSegmentator, and LungMask models compared to a local in-house model.

Amini E, Klein R

PubMed · Sep 4, 2025
Lung lobe segmentation is required to assess lobar function with nuclear imaging before surgical interventions. We evaluated the performance of open-source deep learning-based lung lobe segmentation tools against a similar nnU-Net model trained on a smaller but more clinically representative dataset. We collated and semi-automatically segmented an internal dataset of 164 computed tomography scans and classified them by task difficulty as easy, moderate, or hard. The performance of three open-source models (Multi-Organ Objective Segmentation [MOOSE], TotalSegmentator, and LungMask) was assessed using the Dice similarity coefficient (DSC), robust Hausdorff distance (rHd95), and normalized surface distance (NSD). Additionally, we trained, validated, and tested an nnU-Net model on our local dataset and compared its performance with that of the other software on the test subset. All models were evaluated for generalizability on an external competition dataset (LOLA11, n = 55). TotalSegmentator outperformed MOOSE in DSC and NSD across all difficulty levels (p < 0.001), but not in rHd95 (p = 1.000). MOOSE and TotalSegmentator surpassed LungMask across all metrics and difficulty classes (p < 0.001). Our model exceeded all other models on the internal test subset (n = 33) in all metrics and across all difficulty classes (p < 0.001), as well as on the external dataset. Missing lobes were correctly identified only by our model and LungMask, in 3 and 1 of 7 cases, respectively. Open-source segmentation tools perform well in straightforward cases but struggle in unfamiliar, complex cases. Training on diverse, specialized datasets can improve generalizability, emphasizing representative data over sheer quantity. Training lung lobe segmentation models on a locally representative variety of cases improves accuracy, thus enhancing presurgical planning, ventilation-perfusion analysis, and disease localization, potentially impacting treatment decisions and patient outcomes in respiratory and thoracic care. Deep learning models trained on non-specialized datasets struggle with complex lung anomalies, yet their real-world limitations are insufficiently assessed. Training an identical model on a smaller yet clinically diverse and representative cohort improved performance in challenging cases. Data diversity outweighs quantity in deep learning-based segmentation models. Accurate lung lobe segmentation may enhance presurgical assessment of lobar ventilation and perfusion, optimizing clinical decision-making and patient outcomes.
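
For reference, the Dice similarity coefficient used throughout this comparison reduces to the overlap formula below. This is a generic implementation, not code from any of the evaluated packages; the lobe label values are assumptions.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for a single lobe label in two label maps."""
    p = (pred == label)
    r = (ref == label)
    denom = p.sum() + r.sum()
    if denom == 0:
        return 1.0  # label absent in both masks counts as agreement (e.g., a surgically missing lobe)
    return 2.0 * np.logical_and(p, r).sum() / denom

# Example: average Dice over the five lobe labels (assumed here to be encoded as 1..5).
# scores = [dice_coefficient(pred_mask, ref_mask, lbl) for lbl in range(1, 6)]
```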

A Cascaded Segmentation-Classification Deep Learning Framework for Preoperative Prediction of Occult Peritoneal Metastasis and Early Recurrence in Advanced Gastric Cancer.

Zou T, Chen P, Wang T, Lei T, Chen X, Yang F, Lin X, Li S, Yi X, Zheng L, Lin Y, Zheng B, Song J, Wang L

PubMed · Sep 4, 2025
To develop a cascaded deep learning (DL) framework integrating tumor segmentation with metastatic risk stratification for preoperative prediction of occult peritoneal metastasis (OPM) in advanced gastric cancer (GC), and to validate its generalizability for early peritoneal recurrence (PR) prediction. This multicenter study enrolled 765 patients with advanced GC from three institutions. We developed a two-stage framework: (1) V-Net-based tumor segmentation on CT; (2) DL-based metastatic risk classification using the segmented tumor regions. Clinicopathological predictors were integrated with the deep learning probabilities to construct a combined model. Validation cohorts comprised internal validation (Test1 for OPM, n=168; Test2 for early PR, n=212) and external validation (Test3 for early PR, n=57, from two independent centers). Multivariable analysis identified Borrmann type (OR=1.314, 95% CI: 1.239-1.394), CA125 ≥35 U/mL (OR=1.301, 95% CI: 1.127-1.499), and CT-N+ stage (OR=1.259, 95% CI: 1.124-1.415) as independent OPM predictors. The combined model demonstrated robust performance for both OPM and early PR prediction, achieving AUCs of 0.938 (training) and 0.916 (Test1) for OPM, with improvements over the clinical (ΔAUC +0.039 to +0.107) and DL-only models (ΔAUC +0.044 to +0.104), and AUCs of 0.820-0.825 for early PR (Test2 and Test3) with balanced sensitivity (79.7-88.9%) and specificity (72.4-73.3%). Decision curve analysis confirmed net clinical benefit across clinical thresholds. This CT-based cascaded framework enables reliable preoperative risk stratification for OPM and early PR in advanced GC, potentially refining indications for personalized therapeutic pathways.
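
One common way to build such a combined model is to enter the deep learning probability alongside the clinicopathological variables in a multivariable logistic regression. The sketch below illustrates that idea with scikit-learn on synthetic stand-in data; it is not the authors' code, and the feature construction and coefficients are purely illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-ins: DL metastasis probability plus Borrmann type,
# CA125 >= 35 U/mL (binary), and CT-N+ stage (binary) for each patient.
n = 200
dl_prob   = rng.uniform(size=n)
borrmann  = rng.integers(1, 5, size=n)
ca125_pos = rng.integers(0, 2, size=n)
ctn_pos   = rng.integers(0, 2, size=n)
y = (dl_prob + 0.1 * borrmann + 0.2 * ca125_pos + 0.2 * ctn_pos
     + rng.normal(0, 0.3, n) > 1.0).astype(int)        # fake OPM labels

X = np.column_stack([dl_prob, borrmann, ca125_pos, ctn_pos])
combined = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])   # "nomogram"-style combined model
print("combined-model AUC:", roc_auc_score(y[150:], combined.predict_proba(X[150:])[:, 1]))
```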

Deep Learning Based Multiomics Model for Risk Stratification of Postoperative Distant Metastasis in Colorectal Cancer.

Yao X, Han X, Huang D, Zheng Y, Deng S, Ning X, Yuan L, Ao W

PubMed · Sep 4, 2025
To develop deep learning-based multiomics models for predicting postoperative distant metastasis (DM) and evaluating survival prognosis in colorectal cancer (CRC) patients. This retrospective study included 521 CRC patients who underwent curative surgery at two centers. Preoperative CT and postoperative hematoxylin-eosin (HE)-stained slides were collected. A total of 381 patients from Center 1 were split (7:3) into training and internal validation sets; 140 patients from Center 2 formed the independent external validation set. Patients were grouped by DM status during follow-up. Radiological and pathological models were constructed using independent imaging and pathological predictors. Deep features were extracted with a ResNet-101 backbone to build deep learning radiomics (DLRS) and deep learning pathomics (DLPS) models. Two integrated models were developed: Nomogram 1 (radiological + DLRS) and Nomogram 2 (pathological + DLPS). CT-reported T (cT) stage (OR=2.00, P=0.006) and CT-reported N (cN) stage (OR=1.63, P=0.023) were identified as independent radiologic predictors for the radiological model; pN stage (OR=1.91, P=0.003) and perineural invasion (OR=2.07, P=0.030) were identified as pathological predictors for the pathological model. DLRS and DLPS incorporated 28 and 30 deep features, respectively. In the training set, the areas under the curve (AUCs) for the radiological, pathological, DLRS, DLPS, Nomogram 1, and Nomogram 2 models were 0.657, 0.687, 0.931, 0.914, 0.938, and 0.930, respectively. DeLong's test showed that DLRS, DLPS, and both nomograms significantly outperformed the conventional models (P<0.05). Kaplan-Meier analysis confirmed effective 3-year disease-free survival (DFS) stratification by the nomograms. Deep learning-based multiomics models provided high accuracy for postoperative DM prediction, and the nomogram models enabled reliable DFS risk stratification in CRC patients.
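
The abstract names a ResNet-101 backbone for deep feature extraction. A minimal PyTorch sketch of pulling pooled features from a pretrained ResNet-101 is shown below; it is not the authors' pipeline, and the ROI preprocessing (3-channel 224x224 input, random placeholder tensor) is an assumption.

```python
import torch
import torchvision.models as models

# Pretrained ResNet-101 with the classification head removed, used as a feature extractor
# (downloads ImageNet weights on first use).
backbone = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()        # expose the 2048-d pooled feature vector
backbone.eval()

# A single tumour ROI resampled to 3 x 224 x 224 (a grey-level CT crop or an H&E tile
# replicated across channels); a random tensor stands in here as a placeholder.
roi = torch.rand(1, 3, 224, 224)
with torch.no_grad():
    features = backbone(roi)             # shape (1, 2048)
print(features.shape)                    # such features would feed the DLRS / DLPS models
```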

Convolutional neural network application for automated lung cancer detection on chest CT using Google AI Studio.

Aljneibi Z, Almenhali S, Lanca L

PubMed · Sep 3, 2025
This study aimed to evaluate the diagnostic performance of an artificial intelligence (AI)-enhanced model for detecting lung cancer on computed tomography (CT) images of the chest, assessing diagnostic accuracy, sensitivity, specificity, and interpretative consistency across normal, benign, and malignant cases. An exploratory analysis was performed using the publicly available IQ-OTH/NCCD dataset, comprising 110 CT cases (55 normal, 15 benign, 40 malignant). A pre-trained convolutional neural network in Google AI Studio was fine-tuned using 25 training images and tested on a separate image from each case. Quantitative evaluation of diagnostic accuracy and qualitative content analysis of the AI-generated reports were conducted to assess diagnostic patterns and interpretative behavior. The AI model achieved an overall accuracy of 75.5%, with a sensitivity of 74.5% and a specificity of 76.4%. The area under the ROC curve (AUC) for all cases was 0.824 (95% CI: 0.745-0.897), indicating strong discriminative power. Malignant cases had the highest classification performance (AUC = 0.902), while benign cases were more challenging to classify (AUC = 0.615). Qualitative analysis showed that the AI used consistent radiological terminology but demonstrated oversensitivity to ground-glass opacities, contributing to false positives in non-malignant cases. The AI model showed promising diagnostic potential, particularly in identifying malignancies. However, specificity limitations and interpretative errors in benign and normal cases underscore the need for human oversight and continued model refinement. AI-enhanced CT interpretation can improve efficiency in high-volume settings but should serve as a decision-support tool rather than a replacement for expert image review.
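
For context, the reported accuracy, sensitivity, specificity, and AUC follow from standard confusion-matrix definitions. The snippet below shows a generic computation with scikit-learn on placeholder labels and scores; it is not tied to the study's data or model.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Placeholder labels/scores: 1 = lesion present (benign or malignant), 0 = normal.
y_true  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_score = np.array([0.9, 0.2, 0.4, 0.8, 0.3, 0.6, 0.7, 0.1])
y_pred  = (y_score >= 0.5).astype(int)         # assumed operating threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy   :", (tp + tn) / len(y_true))
print("sensitivity:", tp / (tp + fn))
print("specificity:", tn / (tn + fp))
print("AUC        :", roc_auc_score(y_true, y_score))
```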

Multi-task deep learning for automatic image segmentation and treatment response assessment in metastatic ovarian cancer.

Drury B, Machado IP, Gao Z, Buddenkotte T, Mahani G, Funingana G, Reinius M, McCague C, Woitek R, Sahdev A, Sala E, Brenton JD, Crispin-Ortuzar M

PubMed · Sep 3, 2025
High-grade serous ovarian carcinoma (HGSOC) is characterised by significant spatial and temporal heterogeneity, often presenting at an advanced metastatic stage. One of the most common treatment approaches involves neoadjuvant chemotherapy (NACT), followed by surgery. However, the multi-scale complexity of HGSOC poses a major challenge in evaluating response to NACT. Here, we present a multi-task deep learning approach that facilitates simultaneous segmentation of pelvic/ovarian and omental lesions in contrast-enhanced computerised tomography (CE-CT) scans, as well as treatment response assessment in metastatic ovarian cancer. The model combines multi-scale feature representations from two identical U-Net architectures, allowing for an in-depth comparison of CE-CT scans acquired before and after treatment. The network was trained using 198 CE-CT images of 99 ovarian cancer patients for predicting segmentation masks and evaluating treatment response. It achieves an AUC of 0.78 (95% CI [0.70-0.91]) in an independent cohort of 98 scans of 49 ovarian cancer patients from a different institution. In addition to the classification performance, the segmentation Dice scores are only slightly lower than the current state-of-the-art for HGSOC segmentation. This work is the first to demonstrate the feasibility of a multi-task deep learning approach in assessing chemotherapy-induced tumour changes across the main disease burden of patients with complex multi-site HGSOC, which could be used for treatment response evaluation and disease monitoring.
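
The response-assessment branch described here compares features extracted from pre- and post-NACT scans. Below is a minimal PyTorch sketch of that idea (a shared encoder applied to both time points, with the pooled feature difference fed to a classifier); all shapes and layer sizes are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ResponseHead(nn.Module):
    """Toy Siamese comparison: one shared encoder applied to pre- and post-treatment
    volumes, with the pooled feature difference driving a response classifier."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(              # stand-in for a U-Net encoder
            nn.Conv3d(1, channels, 3, padding=1), nn.ReLU(),
            nn.Conv3d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(channels, 1)   # logit: responder vs non-responder

    def forward(self, pre_ct, post_ct):
        f_pre = self.encoder(pre_ct).flatten(1)
        f_post = self.encoder(post_ct).flatten(1)
        return self.classifier(f_post - f_pre)

# Example with random volumes standing in for co-registered CE-CT crops.
model = ResponseHead()
logit = model(torch.rand(1, 1, 32, 64, 64), torch.rand(1, 1, 32, 64, 64))
print(logit.shape)
```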

A review of image processing and analysis of computed tomography images using deep learning methods.

Anderson D, Ramachandran P, Trapp J, Fielding A

PubMed · Sep 3, 2025
The use of machine learning has seen extraordinary growth since the development of deep learning techniques, notably the deep artificial neural network. Deep learning methodology excels at complicated problems such as image classification, object detection, and natural language processing. A key feature of these networks is their capability to extract useful patterns from vast quantities of complex data, including images. As many branches of healthcare revolve around the generation, processing, and analysis of images, these techniques have become increasingly commonplace. This is especially true for radiotherapy, which relies on anatomical and functional images from a range of imaging modalities, such as Computed Tomography (CT). The aim of this review is to provide an understanding of deep learning methodologies, including neural network types and structures, and to link these general concepts to medical CT image processing for radiotherapy. Specifically, it focuses on the stages of enhancement and analysis, covering image denoising, super-resolution, generation, registration, and segmentation, supported by examples from recent literature.

Disentangled deep learning method for interior tomographic reconstruction of low-dose X-ray CT.

Chen C, Zhang L, Gao H, Wang Z, Xing Y, Chen Z

PubMed · Sep 3, 2025
Objective: Low-dose interior tomography integrates low-dose CT (LDCT) with region-of-interest (ROI) imaging, which finds wide application in radiation dose reduction and high-resolution imaging. However, the combined effects of noise and data truncation pose great challenges for accurate tomographic reconstruction. This study aims to develop a novel reconstruction framework that achieves high-quality ROI reconstruction and efficient extension of the recoverable region, providing an innovative solution to this coupled ill-posed problem. Approach: We conducted a comprehensive analysis of projection data composition and angular sampling patterns in low-dose interior tomography. Based on this analysis, we proposed two deep learning-based reconstruction pipelines: (1) Deep Projection Extraction-based Reconstruction (DPER), which focuses on ROI reconstruction by disentangling and extracting the noise and background projection contributions with a dual-domain deep neural network; and (2) DPER with Progressive extension (DPER-Pro), which enhances DPER with a progressive "coarse-to-fine" strategy for missing data compensation, enabling simultaneous ROI reconstruction and extension of the recoverable region. The proposed methods were rigorously evaluated through extensive experiments on simulated torso datasets and real CT scans of a torso phantom. Main Results: The experimental results demonstrate that DPER effectively handles the coupled ill-posed problem and achieves high-quality ROI reconstructions by accurately extracting noise and background projections. DPER-Pro extends the recoverable region while preserving ROI image quality by leveraging the disentangled projection components and angular sampling patterns. Both methods outperform competing approaches in reconstructing reliable structures, enhancing generalization, and mitigating noise and truncation artifacts. Significance: This work presents a decoupled deep learning framework for low-dose interior tomography that provides a robust and effective solution to the challenges posed by noise and truncated projections. The proposed methods significantly improve ROI reconstruction quality while efficiently recovering structural information in exterior regions, offering a promising pathway for advancing low-dose ROI imaging across a wide range of applications.
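
The disentangling idea, predicting noise and exterior-background contributions in the projection domain so that their removal leaves a clean ROI sinogram, can be sketched roughly as below. This is a simplified stand-in for illustration only, not the DPER architecture; the two-branch layout, layer sizes, and 2D sinogram shape are all assumptions.

```python
import torch
import torch.nn as nn

class ProjectionExtractor(nn.Module):
    """Toy projection-domain disentangler: from a noisy truncated sinogram it predicts
    noise and exterior-background components, whose removal leaves an estimate of the
    clean ROI sinogram."""
    def __init__(self, ch: int = 32):
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
                nn.Conv2d(ch, 1, 3, padding=1),
            )
        self.noise_branch = branch()
        self.background_branch = branch()

    def forward(self, sinogram):
        noise = self.noise_branch(sinogram)
        background = self.background_branch(sinogram)
        roi_sinogram = sinogram - noise - background   # estimate of the clean ROI projections
        return roi_sinogram, noise, background

# Example on a random (views x detector bins) sinogram; the ROI estimate would then be
# reconstructed with FBP or an iterative solver in the image domain.
sino = torch.rand(1, 1, 360, 256)
roi_sino, noise, bg = ProjectionExtractor()(sino)
print(roi_sino.shape)
```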

Learning functions through Diffusion Maps

Alvaro Almeida Gomez

arXiv preprint · Sep 3, 2025
We propose a data-driven method for approximating real-valued functions on smooth manifolds, building on the Diffusion Maps framework under the manifold hypothesis. Given pointwise evaluations of a function, the method constructs a smooth extension to the ambient space by exploiting diffusion geometry and its connection to the heat equation and the Laplace-Beltrami operator. To address the computational challenges of high-dimensional data, we introduce a dimensionality reduction strategy based on the low-rank structure of the distance matrix, revealed via singular value decomposition (SVD). In addition, we develop an online updating mechanism that enables efficient incorporation of new data, thereby improving scalability and reducing computational cost. Numerical experiments, including applications to sparse CT reconstruction, demonstrate that the proposed methodology outperforms classical feedforward neural networks and interpolation methods in terms of both accuracy and efficiency.
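A minimal sketch of the underlying idea, extending pointwise function values from sampled manifold points to nearby ambient points through diffusion-kernel weights, is shown below. This is a simplified Nystrom-style illustration under an assumed Gaussian bandwidth, not the paper's algorithm, and it omits the SVD-based dimensionality reduction and online updating.

```python
import numpy as np

def diffusion_kernel(X, Y, eps):
    """Gaussian affinity between point sets X (n x d) and Y (m x d)."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / eps)

def extend_function(X_train, f_train, X_new, eps=0.5):
    """Extend f from sampled manifold points to new ambient points by
    kernel-weighted averaging (geometric-harmonics flavour; eps is an assumption)."""
    K = diffusion_kernel(X_new, X_train, eps)
    K /= K.sum(axis=1, keepdims=True)          # row-normalise into averaging weights
    return K @ f_train

# Toy example: extend f = sin(angle) sampled on the unit circle to slightly off-manifold points.
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X_train = np.column_stack([np.cos(theta), np.sin(theta)])
f_train = np.sin(theta)
X_new = 1.05 * X_train[:5]
print(extend_function(X_train, f_train, X_new, eps=0.1))
```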