Deep learning-based segmentation of acute pulmonary embolism in cardiac CT images.

Amini E, Hille G, Hürtgen J, Surov A, Saalfeld S

PubMed · Sep 25 2025
Acute pulmonary embolism (APE) is a common pulmonary condition that, in severe cases, can progress to right ventricular hypertrophy and failure, making it a critical health concern surpassed in severity only by myocardial infarction and sudden death. CT pulmonary angiography (CTPA) is a standard diagnostic tool for detecting APE. However, for treatment planning and prognosis of patient outcome, an accurate assessment of individual APEs is required. Within this study, we compiled and prepared a dataset of 200 CTPA image volumes of patients with APE. We then adapted two state-of-the-art neural networks, the nnU-Net and the transformer-based VT-UNet, to provide fully automatic APE segmentations. The nnU-Net demonstrated robust performance, achieving an average Dice similarity coefficient (DSC) of 88.25 ± 10.19% and an average 95th percentile Hausdorff distance (HD95) of 10.57 ± 34.56 mm across the validation sets in a five-fold cross-validation framework. In comparison, the VT-UNet achieved on-par accuracy, with an average DSC of 87.90 ± 10.94% and a mean HD95 of 10.77 ± 34.19 mm. We applied two state-of-the-art networks for automatic APE segmentation to our compiled CTPA dataset and achieved superior experimental results compared to the current state of the art. In clinical routine, accurate APE segmentations can be used for enhanced patient prognosis and treatment planning.
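The two reported metrics can be reproduced for binary masks in a few lines of numpy/scipy. The sketch below is illustrative only: it computes distances over full voxel sets rather than extracted surfaces (a simplification of the usual surface-based HD95), assumes non-empty masks, and all names are ours, not the authors'.

```python
# Hedged sketch of DSC and HD95 for binary 3D masks; voxel-set based,
# assuming non-empty masks and voxel spacing given in mm.
import numpy as np
from scipy.ndimage import distance_transform_edt

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient of two binary volumes."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred: np.ndarray, gt: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric Hausdorff distance in mm."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    # Distance of every voxel to the nearest foreground voxel of the other mask
    dist_to_gt = distance_transform_edt(~gt, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred, sampling=spacing)
    return np.percentile(np.hstack([dist_to_gt[pred], dist_to_pred[gt]]), 95)
```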

Variational autoencoder-based deep learning and radiomics for predicting pathologic complete response to neoadjuvant chemoimmunotherapy in locally advanced esophageal squamous cell carcinoma.

Gu Q, Chen S, Dekker A, Wee L, Kalendralis P, Yan M, Wang J, Yuan J, Jiang Y

PubMed · Sep 25 2025
Neoadjuvant chemoimmunotherapy (nCIT) is gradually becoming an important treatment strategy for patients with locally advanced esophageal squamous cell carcinoma (LA-ESCC). This study aimed to predict the pathological complete response (pCR) of these patients using variational autoencoder (VAE)-based deep learning and radiomics. A total of 253 LA-ESCC patients who were treated with nCIT and underwent enhanced CT at our hospital between July 2019 and July 2023 were included in the training cohort. VAE-based deep learning and radiomics were used to construct deep learning (DL) models and deep learning radiomics (DLR) models, which were trained and validated via 5-fold cross-validation among the 253 patients. Forty patients recruited from our institution between August 2023 and August 2024 served as the test cohort. The AUCs of the DL and DLR models were 0.935 (95% CI: 0.786-0.992) and 0.949 (95% CI: 0.910-0.986) in the validation cohort, and 0.839 (95% CI: 0.726-0.853) and 0.926 (95% CI: 0.886-0.934) in the test cohort. The gap between precision and recall was smaller for the DLR model than for the DL model. The F1 scores of the DL and DLR models were 0.726 (95% CI: 0.476-0.842) and 0.766 (95% CI: 0.625-0.842) in the validation cohort, and 0.727 (95% CI: 0.645-0.811) and 0.836 (95% CI: 0.820-0.850) in the test cohort. In summary, we used VAE-based deep learning and radiomics to construct a DLR model for predicting pCR in nCIT-treated LA-ESCC patients, and it demonstrated superior performance compared to the DL model.
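The DLR idea, concatenating learned latent codes with handcrafted radiomics features before a classifier, can be sketched with scikit-learn. Everything below is a hypothetical stand-in: the arrays replace the VAE latents and radiomics features the abstract describes, the labels are random, and logistic regression is only one plausible choice of fusion classifier.

```python
# Illustrative deep-learning-radiomics (DLR) fusion sketch, not the authors'
# implementation; feature arrays and labels are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, f1_score
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
z_vae = rng.normal(size=(253, 32))    # VAE latent code per patient (assumed dim)
x_rad = rng.normal(size=(253, 107))   # radiomics features per patient (assumed dim)
y = rng.integers(0, 2, size=253)      # pCR labels (hypothetical)

X = np.hstack([z_vae, x_rad])         # DLR fusion: concatenate the two feature sets
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs, f1s = [], []
for tr, va in cv.split(X, y):
    scaler = StandardScaler().fit(X[tr])
    clf = LogisticRegression(max_iter=1000).fit(scaler.transform(X[tr]), y[tr])
    p = clf.predict_proba(scaler.transform(X[va]))[:, 1]
    aucs.append(roc_auc_score(y[va], p))
    f1s.append(f1_score(y[va], p > 0.5))
print(f"AUC {np.mean(aucs):.3f}  F1 {np.mean(f1s):.3f}")
```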

The identification and severity staging of chronic obstructive pulmonary disease using quantitative CT parameters, radiomics features, and deep learning features.

Feng S, Zhang W, Zhang R, Yang Y, Wang F, Miao C, Chen Z, Yang K, Yao Q, Liang Q, Zhao H, Chen Y, Liang C, Liang X, Chen R, Liang Z

PubMed · Sep 25 2025
To evaluate the value of quantitative CT (QCT) parameters, radiomics features, and deep learning (DL) features based on inspiratory and expiratory CT for the identification and severity staging of chronic obstructive pulmonary disease (COPD). This retrospective analysis included 223 COPD patients and 59 healthy controls from the Guangzhou cohort. We stratified the participants into a training cohort and a testing cohort (7:3) and extracted DL features with the VGG-16 network, radiomics features with the pyradiomics package, and QCT parameters with the NeuLungCARE software. Logistic regression was employed to construct models for the identification and severity staging of COPD. The Shenzhen cohort served as the external validation cohort to assess the generalizability of the models. Among the COPD identification models, Model 5-B1 (the QCT combined with DL model in biphasic CT) showed the best predictive performance, with AUCs of 0.920 and 0.897 in the testing and external validation cohorts, respectively. Among the COPD severity staging models, the predictive performance of Model 4-B2 (the model combining QCT with radiomics features in biphasic CT) and Model 5-B2 (the model combining QCT with DL features in biphasic CT) was superior to that of the other models. This biphasic CT-based multi-modal approach integrating QCT, radiomics, or DL features offers a clinically valuable tool for COPD identification and severity staging.
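The radiomics step names pyradiomics, whose public API supports extraction like the hedged sketch below; the file paths, settings, and the downstream merge with QCT parameters are illustrative assumptions, not the authors' pipeline.

```python
# Hedged pyradiomics extraction sketch; input paths are hypothetical.
import pandas as pd
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()  # first-order, shape, and texture feature classes

# One inspiratory volume with its lung mask (placeholder file names)
features = extractor.execute("ct_inspiratory.nrrd", "lung_mask.nrrd")
row = {k: v for k, v in features.items() if not k.startswith("diagnostics_")}

# Downstream, rows like this would be concatenated per patient with QCT
# parameters (e.g. low-attenuation metrics) and DL features before the
# logistic regression stage described in the abstract.
df = pd.DataFrame([row])
print(df.shape)
```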

An open deep learning-based framework and model for tooth instance segmentation in dental CBCT.

Zhou Y, Xu Y, Khalil B, Nalley A, Tarce M

PubMed · Sep 25 2025
Current dental CBCT segmentation tools often lack accuracy, accessibility, or comprehensive anatomical coverage. To address this, we constructed a densely annotated dental CBCT dataset and developed a deep learning model, OralSeg, for tooth-level instance segmentation, deployed as a one-click tool and made freely accessible for non-commercial use. We established a standardized annotated dataset covering 35 key oral anatomical structures and employed UNetR as the backbone network, combining a Swin Transformer with a spatial Mamba module for multi-scale residual feature fusion. The OralSeg model was designed and optimized for precise instance segmentation of dental CBCT images and integrated into the 3D Slicer platform, providing a graphical user interface for one-click segmentation. OralSeg achieved a Dice similarity coefficient of 0.8316 ± 0.0305 on CBCT instance segmentation, compared against SwinUNETR and 3D U-Net. The model significantly improves segmentation performance, especially for complex oral anatomical structures such as apical areas, alveolar bone margins, and mandibular nerve canals. OralSeg thus provides an effective solution for instance segmentation of dental CBCT images: it allows clinical dentists and researchers with no AI background to perform one-click segmentation, may assist in clinical diagnosis, educational training, and research, and can contribute to the broader adoption of digital dentistry in precision medicine.
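For readers wanting a starting point, MONAI ships a UNETR implementation matching the backbone named above; the sketch below only mirrors the 35-structure setup, since OralSeg's Swin/Mamba fusion modules are not specified here, and the patch size and background-class convention are assumptions.

```python
# Minimal UNETR baseline sketch for a 35-structure CBCT task; not OralSeg itself.
import torch
from monai.networks.nets import UNETR

model = UNETR(
    in_channels=1,           # single-channel CBCT
    out_channels=36,         # 35 anatomical structures + background (assumption)
    img_size=(96, 96, 96),   # training patch size (assumption)
)
with torch.no_grad():
    logits = model(torch.randn(1, 1, 96, 96, 96))  # -> (1, 36, 96, 96, 96)
```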

Integrating CT image reconstruction, segmentation, and large language models for enhanced diagnostic insight.

Abbasi AA, Farooqi AH

PubMed · Sep 25 2025
Deep learning has significantly advanced medical imaging, particularly computed tomography (CT), which is vital for diagnosing heart and cancer patients, evaluating treatments, and tracking disease progression. High-quality CT images enhance clinical decision-making, making image reconstruction a key research focus. This study develops a framework to improve CT image quality while minimizing reconstruction time. The proposed four-step medical image analysis framework includes reconstruction, preprocessing, segmentation, and image description. Initially, the raw projection data is organized into a sinogram, from which a CT image of the pelvis is reconstructed via the inverse Radon transform. A convolutional neural network (CNN) ensures high-quality reconstruction. A bilateral filter reduces noise while preserving critical anatomical features. If required, a medical expert can review the image. The K-means clustering algorithm segments the preprocessed image, isolating the pelvis and removing irrelevant structures. Finally, the FuseCap model generates an automated textual description to assist radiologists. The framework's effectiveness is evaluated using peak signal-to-noise ratio (PSNR), normalized mean square error (NMSE), and structural similarity index measure (SSIM). The achieved values (PSNR 30.784, NMSE 0.032, and SSIM 0.877) demonstrate superior performance compared to existing methods. The proposed framework reconstructs high-quality CT images from raw projection data, integrating segmentation and automated descriptions to provide a decision-support tool for medical experts. By enhancing image clarity, segmenting outputs, and providing descriptive insights, this research aims to reduce the workload of frontline medical professionals and improve diagnostic efficiency.
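The sinogram-to-image step and the three evaluation metrics can be demonstrated end to end with scikit-image. This is a minimal sketch on the Shepp-Logan phantom, not the paper's pelvis data; the CNN refinement, bilateral filtering, and K-means stages are outside this snippet.

```python
# Forward projection, filtered back-projection, and PSNR/NMSE/SSIM on a phantom.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

image = resize(shepp_logan_phantom(), (256, 256))
theta = np.linspace(0.0, 180.0, 256, endpoint=False)
sinogram = radon(image, theta=theta)                 # projection data -> sinogram
recon = iradon(sinogram, theta=theta,                # inverse Radon via
               filter_name="ramp")                   # filtered back-projection

data_range = image.max() - image.min()
psnr = peak_signal_noise_ratio(image, recon, data_range=data_range)
nmse = np.sum((image - recon) ** 2) / np.sum(image ** 2)
ssim = structural_similarity(image, recon, data_range=data_range)
print(f"PSNR {psnr:.2f} dB  NMSE {nmse:.4f}  SSIM {ssim:.3f}")
```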

An Anisotropic Cross-View Texture Transfer with Multi-Reference Non-Local Attention for CT Slice Interpolation

Kwang-Hyun Uhm, Hyunjun Cho, Sung-Hoo Hong, Seung-Won Jung

arXiv preprint · Sep 24 2025
Computed tomography (CT) is one of the most widely used non-invasive imaging modalities for medical diagnosis. In clinical practice, CT images are usually acquired with large slice thicknesses due to the high cost of memory storage and operation time, resulting in an anisotropic CT volume with much lower inter-slice resolution than in-plane resolution. Since such inconsistent resolution may lead to difficulties in disease diagnosis, deep learning-based volumetric super-resolution methods have been developed to improve inter-slice resolution. Most existing methods conduct single-image super-resolution along the through-plane direction or synthesize intermediate slices from adjacent slices; however, the anisotropic characteristic of 3D CT volumes has not been well explored. In this paper, we propose a novel cross-view texture transfer approach for CT slice interpolation by fully utilizing the anisotropic nature of 3D CT volumes. Specifically, we design a unique framework that takes high-resolution in-plane texture details as a reference and transfers them to low-resolution through-plane images. To this end, we introduce a multi-reference non-local attention module that extracts meaningful features for reconstructing through-plane high-frequency details from multiple in-plane images. Through extensive experiments, we demonstrate that our method performs significantly better in CT slice interpolation than existing competing methods on public CT datasets including a real-paired benchmark, verifying the effectiveness of the proposed framework. The source code of this work is available at https://github.com/khuhm/ACVTT.
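A multi-reference non-local attention block in the spirit of the abstract can be sketched in PyTorch: a through-plane query feature map attends over features from several in-plane reference slices. This is our own plausible reading, not the authors' released code at the GitHub link; shapes, projections, and the residual fusion are assumptions.

```python
# Sketch: one low-res through-plane query attends over N in-plane references.
import torch
import torch.nn as nn

class MultiRefNonLocalAttention(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, query_feat, ref_feats):
        # query_feat: (B, C, H, W); ref_feats: (B, N, C, H, W)
        B, N, C, H, W = ref_feats.shape
        q = self.q(query_feat).flatten(2).transpose(1, 2)     # (B, HW, C)
        refs = ref_feats.reshape(B * N, C, H, W)
        k = self.k(refs).reshape(B, N, C, H * W).permute(0, 2, 1, 3).reshape(B, C, N * H * W)
        v = self.v(refs).reshape(B, N, C, H * W).permute(0, 2, 1, 3).reshape(B, C, N * H * W)
        attn = torch.softmax(q @ k / C ** 0.5, dim=-1)        # (B, HW, N*HW)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)      # (B, C, HW)
        return query_feat + out.reshape(B, C, H, W)           # residual fusion

# Example: 4 in-plane references guiding one through-plane feature map
attn = MultiRefNonLocalAttention(channels=32)
y = attn(torch.randn(1, 32, 32, 32), torch.randn(1, 4, 32, 32, 32))
```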

Scan-do Attitude: Towards Autonomous CT Protocol Management using a Large Language Model Agent

Xingjian Kang, Linda Vorberg, Andreas Maier, Alexander Katzmann, Oliver Taubmann

arXiv preprint · Sep 24 2025
Managing scan protocols in Computed Tomography (CT), which includes adjusting acquisition parameters, configuring reconstructions, and selecting postprocessing tools in a patient-specific manner, is time-consuming and requires clinical as well as technical expertise. At the same time, there is an increasing shortage of skilled workforce in radiology. To address this issue, a Large Language Model (LLM)-based agent framework is proposed to assist with the interpretation and execution of protocol configuration requests given in natural language or a structured, device-independent format, aiming to improve workflow efficiency and reduce technologists' workload. The agent combines in-context learning, instruction following, and structured tool-calling abilities to identify relevant protocol elements and apply accurate modifications. In a systematic evaluation, experimental results indicate that the agent can effectively retrieve protocol components, generate device-compatible protocol definition files, and faithfully implement user requests. Despite demonstrating feasibility in principle, the approach faces limitations regarding syntactic and semantic validity due to the lack of a unified device API, and challenges with ambiguous or complex requests. In summary, the findings show a clear path towards LLM-based agents for supporting scan protocol management in CT imaging.
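The structured tool-calling layer such an agent needs can be sketched without any specific model API, which the abstract does not name. Below, the tool name, the protocol fields, and the JSON call format are all hypothetical; the LLM itself is abstracted to "something that emits a structured call".

```python
# Hedged sketch of a tool registry and dispatcher for protocol edits.
import json
from typing import Callable, Dict

protocol = {"kvp": 120, "reconstruction_kernel": "Br40", "slice_thickness_mm": 3.0}

def set_parameter(name: str, value) -> str:
    """Apply one protocol modification, rejecting unknown elements."""
    if name not in protocol:
        return f"error: unknown protocol element '{name}'"
    protocol[name] = value
    return f"set {name} = {value}"

TOOLS: Dict[str, Callable] = {"set_parameter": set_parameter}

def dispatch(tool_call_json: str) -> str:
    """Execute one structured tool call emitted by the LLM."""
    call = json.loads(tool_call_json)
    return TOOLS[call["tool"]](**call["arguments"])

# e.g. the agent maps "use a 1 mm slice thickness" to:
print(dispatch('{"tool": "set_parameter", "arguments": {"name": "slice_thickness_mm", "value": 1.0}}'))
print(json.dumps(protocol, indent=2))
```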

Dose reduction in radiotherapy treatment planning CT via deep learning-based reconstruction: a single‑institution study.

Yasui K, Kasugai Y, Morishita M, Saito Y, Shimizu H, Uezono H, Hayashi N

PubMed · Sep 24 2025
To quantify radiation dose reduction in radiotherapy treatment-planning CT (RTCT) using a deep learning-based reconstruction (DLR; AiCE) algorithm compared with adaptive iterative dose reduction (IR; AIDR), and to evaluate its potential to inform RTCT-specific diagnostic reference levels (DRLs). In this single-institution retrospective study, RTCT scans of four anatomical regions (head, head and neck, lung, and pelvis) were acquired on a large-bore CT scanner. Scans reconstructed with IR (n = 820) and DLR (n = 854) were compared. The 75th-percentile CTDIvol and DLP were determined per site for each algorithm. Dose reduction rates were calculated as (CTDIvol,IR - CTDIvol,DLR) / CTDIvol,IR × 100%, and analogously for DLP. Statistical significance was assessed with the Mann-Whitney U-test. DLR yielded CTDIvol reductions of 30.4-75.4% and DLP reductions of 23.1-73.5% across sites (p < 0.001), with the greatest reductions in head and neck RTCT (CTDIvol: 75.4%; DLP: 73.5%). Variability also narrowed. Compared with published national DRLs, DLR achieved 34.8 mGy and 18.8 mGy lower CTDIvol for head and neck versus UK DRLs and Japanese multi-institutional data, respectively. DLR substantially lowers RTCT dose indices, providing quantitative data to guide RTCT-specific DRLs and optimize clinical workflows.
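A worked example of the reduction-rate formula, using the head-and-neck figure reported above: only the 75.4% reduction comes from the abstract, while the absolute CTDIvol values are illustrative placeholders.

```python
# Percent reduction of DLR relative to IR: (IR - DLR) / IR * 100.
def reduction_pct(val_ir: float, val_dlr: float) -> float:
    return (val_ir - val_dlr) / val_ir * 100.0

ctdi_ir, ctdi_dlr = 40.0, 9.84   # hypothetical 75th-percentile CTDIvol in mGy
print(f"{reduction_pct(ctdi_ir, ctdi_dlr):.1f}% reduction")  # -> 75.4%
```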

Incidental Cardiovascular Findings in Lung Cancer Screening and Noncontrast Chest Computed Tomography.

Cham MD, Shemesh J

PubMed · Sep 24 2025
While the primary goal of lung cancer screening CT is to detect early-stage lung cancer in high-risk populations, it often reveals asymptomatic cardiovascular abnormalities that can be clinically significant. These findings include coronary artery calcifications (CACs), myocardial pathologies, cardiac chamber enlargement, valvular lesions, and vascular disease. CAC, a marker of subclinical atherosclerosis, is particularly emphasized due to its strong predictive value for cardiovascular events and mortality. Guidelines recommend qualitative or quantitative CAC scoring on all noncontrast chest CTs. Other actionable findings include aortic aneurysms, pericardial disease, and myocardial pathology, some of which may indicate past or impending cardiac events. This article explores the wide range of incidental cardiovascular findings detectable during low-dose CT (LDCT) scans for lung cancer screening, as well as noncontrast chest CT scans. Distinguishing which findings warrant further evaluation is essential to avoid overdiagnosis, unnecessary anxiety, and resource misuse. The article advocates for a structured approach to follow-up based on the clinical significance of each finding and the patient's overall risk profile. It also notes the rising role of artificial intelligence in automatically detecting and quantifying these abnormalities, enabling early behavioral modification or medical and surgical intervention. Ultimately, this piece highlights the opportunity to reframe LDCT as a comprehensive cardiothoracic screening tool.
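For the quantitative CAC scoring mentioned above, the classic Agatston rule (threshold at 130 HU, per-lesion area times a peak-density weight of 1-4) can be sketched as follows. This is a simplified, hedged illustration on a single axial slice, not a validated clinical tool; the minimum-area filter and all names are ours.

```python
# Simplified Agatston-style scoring of one axial CT slice in HU.
import numpy as np
from scipy import ndimage

def agatston_score(ct_hu: np.ndarray, pixel_area_mm2: float) -> float:
    mask = ct_hu >= 130                        # calcification threshold, 130 HU
    labels, n = ndimage.label(mask)            # connected candidate lesions
    score = 0.0
    for i in range(1, n + 1):
        lesion = labels == i
        area = lesion.sum() * pixel_area_mm2
        if area < 1.0:                         # ignore sub-mm2 specks (common rule)
            continue
        peak = ct_hu[lesion].max()
        weight = 1 + min(int(peak // 100) - 1, 3)  # 130-199->1 ... >=400->4
        score += area * weight
    return score
```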

SHMoAReg: Spark Deformable Image Registration via Spatial Heterogeneous Mixture of Experts and Attention Heads

Yuxi Zheng, Jianhui Feng, Tianran Li, Marius Staring, Yuchuan Qiao

arXiv preprint · Sep 24 2025
Encoder-Decoder architectures are widely used in deep learning-based Deformable Image Registration (DIR), where the encoder extracts multi-scale features and the decoder predicts deformation fields by recovering spatial locations. However, current methods lack specialized extraction of registration-relevant features and predict deformation jointly and homogeneously in all three directions. In this paper, we propose a novel expert-guided DIR network, named SHMoAReg, with a Mixture of Experts (MoE) mechanism applied in both the encoder and the decoder. Specifically, we incorporate Mixture of Attention heads (MoA) into the encoder layers and Spatial Heterogeneous Mixture of Experts (SHMoE) into the decoder layers. The MoA enhances the specialization of feature extraction by dynamically selecting the optimal combination of attention heads for each image token. Meanwhile, the SHMoE predicts deformation fields heterogeneously in the three directions for each voxel, using experts with varying kernel sizes. Extensive experiments conducted on two publicly available datasets show consistent improvements over various methods, with a notable increase from 60.58% to 65.58% in Dice score on the abdominal CT dataset. Furthermore, SHMoAReg enhances model interpretability by differentiating the experts' utilities across and within different resolution layers. To the best of our knowledge, we are the first to introduce the MoE mechanism into DIR tasks. The code will be released soon.
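Since the code is not yet released, here is a rough sketch of the SHMoE idea as described: experts with different kernel sizes each predict a 3-channel deformation, and a per-voxel, per-direction gate mixes them. The module below is our own reading of the abstract, with assumed shapes and gating details.

```python
# Hedged SHMoE-style decoder head: heterogeneous per-voxel, per-direction gating.
import torch
import torch.nn as nn

class SHMoEHead(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv3d(channels, 3, k, padding=k // 2) for k in kernel_sizes
        )
        # gate emits one weight per expert, per direction, per voxel
        self.gate = nn.Conv3d(channels, len(kernel_sizes) * 3, 1)

    def forward(self, feat):                              # feat: (B, C, D, H, W)
        B, _, D, H, W = feat.shape
        E = len(self.experts)
        w = self.gate(feat).reshape(B, E, 3, D, H, W).softmax(dim=1)
        flows = torch.stack([e(feat) for e in self.experts], dim=1)  # (B, E, 3, D, H, W)
        return (w * flows).sum(dim=1)                     # (B, 3, D, H, W) deformation

head = SHMoEHead(channels=16)
flow = head(torch.randn(1, 16, 24, 24, 24))  # per-voxel 3D displacement field
```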