
Pathomics-based machine learning models for optimizing LungPro navigational bronchoscopy in peripheral lung lesion diagnosis: a retrospective study.

Ying F, Bao Y, Ma X, Tan Y, Li S

PubMed · Sep 26 2025
To construct a pathomics-based machine learning model to enhance the diagnostic efficacy of LungPro navigational bronchoscopy for peripheral pulmonary lesions, and to optimize the management strategy for LungPro-diagnosed negative lesions. Clinical data and hematoxylin and eosin (H&E)-stained whole slide images (WSIs) were collected from 144 consecutive patients undergoing LungPro virtual bronchoscopy at a single institution between January 2022 and December 2023. Patients were stratified into diagnosis-positive and diagnosis-negative cohorts based on histopathological or etiological confirmation. An artificial intelligence (AI) model was developed and validated using 94 diagnosis-positive cases. Logistic regression (LR) identified associations between clinical/imaging characteristics and risk factors for malignant pulmonary lesions. We implemented a convolutional neural network (CNN) with weakly supervised learning to extract image-level features, followed by multiple instance learning (MIL) for patient-level feature aggregation. Multiple machine learning (ML) algorithms were applied to model the extracted features. A multimodal diagnostic framework integrating clinical, imaging, and pathomics data was subsequently developed and evaluated on 50 LungPro-negative patients to assess its diagnostic performance and predictive validity. Univariable and multivariable logistic regression analyses identified age, lesion boundary, and mean computed tomography (CT) attenuation as independent risk factors for malignant peripheral pulmonary lesions (P < 0.05). A histopathological model using a MIL fusion strategy showed strong diagnostic performance for lung cancer, with area under the curve (AUC) values of 0.792 (95% CI 0.680-0.903) in the training cohort and 0.777 (95% CI 0.531-1.000) in the test cohort. Combining predictive clinical features with pathological characteristics improved diagnostic performance for peripheral pulmonary lesions to an AUC of 0.848 (95% CI 0.6945-1.0000). In patients with initially negative LungPro biopsy results, the model identified 20 of 28 malignant lesions (sensitivity 71.43%) and 15 of 22 benign lesions (specificity 68.18%). Class activation mapping (CAM) validated the model by highlighting key malignant features, including conspicuous nucleoli and nuclear atypia. The fusion diagnostic model incorporating clinical and pathomics features markedly enhances the diagnostic accuracy of LungPro in this retrospective cohort. The model aids in the detection of subtle malignant characteristics, thereby offering evidence to support precise and targeted therapeutic interventions for lesions that LungPro classifies as negative in clinical settings.
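
As a reading aid, the patient-level aggregation described above can be sketched as attention-based MIL pooling, one common MIL fusion strategy; the abstract does not disclose the exact aggregator, so the architecture and dimensions below are illustrative assumptions.

```python
# Hedged sketch: attention-based MIL pooling of CNN patch features into a
# patient-level malignancy logit. Feature sizes are illustrative assumptions.
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    def __init__(self, feat_dim: int = 512, hidden: int = 128):
        super().__init__()
        # Scores one attention weight per patch embedding.
        self.attn = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )
        self.classifier = nn.Linear(feat_dim, 1)  # benign-vs-malignant logit

    def forward(self, patch_feats: torch.Tensor) -> torch.Tensor:
        # patch_feats: (n_patches, feat_dim), one "bag" per patient
        weights = torch.softmax(self.attn(patch_feats), dim=0)  # (n_patches, 1)
        bag_feat = (weights * patch_feats).sum(dim=0)           # (feat_dim,)
        return self.classifier(bag_feat)                        # patient-level logit

bag = torch.randn(200, 512)              # e.g., 200 patch embeddings from one WSI
prob = torch.sigmoid(AttentionMIL()(bag))
```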

Hemorica: A Comprehensive CT Scan Dataset for Automated Brain Hemorrhage Classification, Segmentation, and Detection

Kasra Davoodi, Mohammad Hoseyni, Javad Khoramdel, Reza Barati, Reihaneh Mortazavi, Amirhossein Nikoofard, Mahdi Aliyari-Shoorehdeli, Jaber Hatam Parikhan

arXiv preprint · Sep 26 2025
Timely diagnosis of intracranial hemorrhage (ICH) on computed tomography (CT) scans remains a clinical priority, yet the development of robust artificial intelligence (AI) solutions is still hindered by fragmented public data. To close this gap, we introduce Hemorica, a publicly available collection of 372 head CT examinations acquired between 2012 and 2024. Each scan has been exhaustively annotated for five ICH subtypes, namely epidural (EPH), subdural (SDH), subarachnoid (SAH), intraparenchymal (IPH), and intraventricular (IVH), yielding patient-wise and slice-wise classification labels, subtype-specific bounding boxes, two-dimensional pixel masks, and three-dimensional voxel masks. A double-reading workflow, preceded by a pilot consensus phase and supported by neurosurgeon adjudication, maintained low inter-rater variability. Comprehensive statistical analysis confirms the clinical realism of the dataset. To establish reference baselines, standard convolutional and transformer architectures were fine-tuned for binary slice classification and hemorrhage segmentation. With only minimal fine-tuning, lightweight models such as MobileViT-XS achieved an F1 score of 87.8% in binary classification, whereas a U-Net with a DenseNet161 encoder reached a Dice score of 85.5% for binary lesion segmentation; these results validate both the quality of the annotations and the sufficiency of the sample size. Hemorica therefore offers a unified, fine-grained benchmark that supports multi-task and curriculum learning, facilitates transfer to larger but weakly labelled cohorts, and supports the design of AI-based assistants for ICH detection and quantification.
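
For orientation, a minimal sketch of the segmentation baseline named in the abstract and the Dice metric it is scored with; building the model via segmentation_models_pytorch and the input size are assumptions, not the authors' exact setup.

```python
# Hedged sketch: U-Net with a DenseNet161 encoder plus the Dice score used
# to evaluate binary lesion segmentation. Library choice and sizes assumed.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="densenet161",   # encoder named in the abstract
    encoder_weights="imagenet",
    in_channels=1,                # single-channel CT slice
    classes=1,                    # binary hemorrhage mask
)

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice score on thresholded binary masks (the reported metric)."""
    p = (pred > 0.5).float().flatten()
    t = (target > 0.5).float().flatten()
    return (2 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)

logits = model(torch.randn(2, 1, 256, 256))  # (batch, channel, H, W)
score = dice(torch.sigmoid(logits), torch.randint(0, 2, (2, 1, 256, 256)))
```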

Automatic Body Region Classification in CT Scans Using Deep Learning.

Golzan M, Lee H, Ngatched TMN, Zhang L, Michalak M, Chow V, Beg MF, Popuri K

PubMed · Sep 26 2025
Accurate classification of anatomical regions in computed tomography (CT) scans is essential for optimizing downstream diagnostic and analytic workflows in medical imaging. We demonstrate the high performance that deep learning (DL) algorithms can achieve in classifying body regions in CT images acquired under various protocols. Our model was trained using a dataset of 5485 anonymized Neuroimaging Informatics Technology Initiative (NIfTI) CT scans collected from 45 different health centers. The dataset was split into 3290 scans for training, 1097 for validation, and 1098 for testing. Each body CT scan was classified into one of six classes covering the whole body: chest, abdomen, pelvis, chest and abdomen, abdomen and pelvis, and chest and abdomen and pelvis. The DL model achieved an accuracy, precision, recall, and F1-score of 97.53% (95% CI: 96.62%, 98.45%), 97.56% (95% CI: 96.6%, 98.4%), 97.6% (95% CI: 96.7%, 98.5%), and 97.56% (95% CI: 96.6%, 98.4%), respectively, in identifying the different body regions. These findings demonstrate the robustness of our approach to annotating CT images across wide variation in both acquisition protocols and patient demographics. This study underlines the potential that DL holds for medical imaging, in particular for automating body region classification in CT. Our findings confirm that such models could be integrated into clinical routines to improve diagnostic efficiency and workflow harmonization.
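
A short sketch of the reported evaluation: accuracy plus macro-averaged precision, recall, and F1 over the six body-region classes with scikit-learn; the class-label strings and the averaging choice are assumptions.

```python
# Hedged sketch of the six-class evaluation protocol with scikit-learn.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

CLASSES = ["chest", "abdomen", "pelvis",
           "chest+abdomen", "abdomen+pelvis", "chest+abdomen+pelvis"]

def evaluate(y_true, y_pred):
    # Accuracy over all test scans, then macro-averaged per-class metrics.
    acc = accuracy_score(y_true, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, labels=CLASSES, average="macro", zero_division=0
    )
    return {"accuracy": acc, "precision": prec, "recall": rec, "f1": f1}

# Example usage with placeholder predictions:
print(evaluate(["chest", "pelvis", "abdomen"], ["chest", "pelvis", "pelvis"]))
```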

EqDiff-CT: Equivariant Conditional Diffusion model for CT Image Synthesis from CBCT

Alzahra Altalib, Chunhui Li, Alessandro Perelli

arXiv preprint · Sep 26 2025
Cone-beam computed tomography (CBCT) is widely used for image-guided radiotherapy (IGRT). It provides real-time visualization at low cost and dose. However, photon scatter and beam hardening cause artifacts in CBCT, including inaccurate Hounsfield units (HU), which reduce its reliability for dose calculation and adaptive planning. By contrast, computed tomography (CT) offers better image quality and accurate HU calibration but is usually acquired offline and fails to capture intra-treatment anatomical changes. Accurate CBCT-to-CT synthesis is therefore needed to close the imaging-quality gap in adaptive radiotherapy workflows. To this end, we propose a novel diffusion-based conditional generative model, coined EqDiff-CT, to synthesize high-quality CT images from CBCT. EqDiff-CT employs a denoising diffusion probabilistic model (DDPM) to iteratively inject noise and learn latent representations that enable reconstruction of anatomically consistent CT images. A group-equivariant conditional U-Net backbone, implemented with e2cnn steerable layers, enforces rotational equivariance (cyclic C4 symmetry), helping preserve fine structural details while minimizing noise and artifacts. The system was trained and validated on the SynthRAD2025 dataset, comprising CBCT-CT scans across multiple head-and-neck anatomical sites, and compared with advanced methods such as CycleGAN and DDPM. EqDiff-CT provided substantial gains in structural fidelity, HU accuracy, and quantitative metrics. Visual findings further confirm improved detail recovery, sharper soft-tissue boundaries, and realistic bone reconstruction. The results suggest that the diffusion model offers a robust and generalizable framework for CBCT enhancement, improving both image quality and clinical confidence in CBCT-guided treatment planning and dose calculation.
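
The abstract names e2cnn for the steerable layers; below is a minimal C4-equivariant convolution block in that library to illustrate the symmetry constraint. Channel counts and layer choices are illustrative, not the EqDiff-CT architecture.

```python
# Hedged sketch: a C4 rotation-equivariant conv block built with e2cnn.
import torch
from e2cnn import gspaces
import e2cnn.nn as enn

r2_act = gspaces.Rot2dOnR2(N=4)  # cyclic C4 rotations, as stated in the abstract

in_type = enn.FieldType(r2_act, [r2_act.trivial_repr])       # 1-channel CBCT slice
hid_type = enn.FieldType(r2_act, 8 * [r2_act.regular_repr])  # 8 regular fields (assumed)

block = enn.SequentialModule(
    enn.R2Conv(in_type, hid_type, kernel_size=3, padding=1),
    enn.InnerBatchNorm(hid_type),
    enn.ReLU(hid_type),
)

x = enn.GeometricTensor(torch.randn(1, 1, 64, 64), in_type)
y = block(x)  # rotating x by 90 degrees rotates y correspondingly
```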

Deep learning-based artefact reduction in low-dose dental cone beam computed tomography with high-attenuation materials.

Park HS, Jeon K, Seo JK

PubMed · Sep 25 2025
This paper examines current challenges in computed tomography (CT), with a critical exploration of existing methodologies from a mathematical perspective. Specifically, it aims to identify research directions for enhancing image quality in low-dose, cost-effective cone beam CT (CBCT) systems, which have recently gained widespread use in general dental clinics. Dental CBCT offers a substantial cost advantage over standard medical CT, making it affordable for local dental practices; however, this affordability brings significant challenges of image quality degradation, further complicated by the presence of metallic implants, which are particularly common in older patients. The paper investigates metal-induced artefacts stemming from mismatches in the forward model used by conventional reconstruction methods and explains an alternative approach that bypasses the traditional Radon transform model. Additionally, it examines both the potential and the limitations of deep learning-based methods in tackling these challenges, offering insights into their effectiveness in improving image quality in low-dose dental CBCT. This article is part of the theme issue 'Frontiers of applied inverse problems in science and engineering'.
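
To illustrate the forward-model mismatch the paper discusses, here is a toy experiment (our own, not from the paper): an ideal monochromatic Radon model reconstructs cleanly, while a crude beam-hardening surrogate applied to the same sinogram produces metal-induced streaks under filtered back-projection.

```python
# Illustrative sketch: why a linear Radon forward model breaks down around
# high-attenuation metal. The saturation used below is a crude surrogate for
# polychromatic beam hardening, chosen only for demonstration.
import numpy as np
from skimage.transform import radon, iradon

phantom = np.zeros((128, 128))
phantom[40:90, 40:90] = 1.0    # soft-tissue block
phantom[60:68, 60:68] = 50.0   # metal implant (very high attenuation)

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sino = radon(phantom, theta=theta)            # ideal monochromatic line integrals

# Measured attenuation saturates through metal (beam-hardening surrogate).
sino_poly = 20.0 * (1.0 - np.exp(-sino / 20.0))

recon_ideal = iradon(sino, theta=theta)       # consistent model: clean image
recon_metal = iradon(sino_poly, theta=theta)  # mismatched model: streak artefacts
```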

Deep learning-based segmentation of acute pulmonary embolism in cardiac CT images.

Amini E, Hille G, Hürtgen J, Surov A, Saalfeld S

PubMed · Sep 25 2025
Acute pulmonary embolism (APE) is a common pulmonary condition that, in severe cases, can progress to right ventricular hypertrophy and failure, making it a critical health concern surpassed in severity only by myocardial infarction and sudden death. CT pulmonary angiography (CTPA) is the standard diagnostic tool for detecting APE; however, treatment planning and prognosis of patient outcome require an accurate assessment of individual APEs. Within this study, we compiled and prepared a dataset of 200 CTPA image volumes of patients with APE. We then adapted two state-of-the-art neural networks, the nnU-Net and the transformer-based VT-UNet, to provide fully automatic APE segmentation. The nnU-Net demonstrated robust performance, achieving an average Dice similarity coefficient (DSC) of 88.25 ± 10.19% and an average 95th percentile Hausdorff distance (HD95) of 10.57 ± 34.56 mm across the validation sets of a five-fold cross-validation. The VT-UNet achieved comparable accuracy, with an average DSC of 87.90 ± 10.94% and a mean HD95 of 10.77 ± 34.19 mm. Applied to our compiled CTPA dataset, both networks achieved results superior to the current state of the art. In clinical routine, accurate APE segmentations can support patient prognosis and treatment planning.
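
For reference, a sketch of the two reported metrics, DSC and HD95, computed on binary masks; extracting surfaces via SciPy distance transforms is one standard implementation choice, not necessarily the authors'.

```python
# Hedged sketch of the two evaluation metrics on binary 3D masks
# (assumes non-empty masks; spacing in mm matches the reported HD95 units).
import numpy as np
from scipy.ndimage import distance_transform_edt, binary_erosion

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ binary_erosion(a)   # boundary voxels of prediction
    surf_b = b ^ binary_erosion(b)   # boundary voxels of ground truth
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    # Symmetric 95th percentile of surface-to-surface distances.
    return max(np.percentile(dist_to_b[surf_a], 95),
               np.percentile(dist_to_a[surf_b], 95))
```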

Variational autoencoder-based deep learning and radiomics for predicting pathologic complete response to neoadjuvant chemoimmunotherapy in locally advanced esophageal squamous cell carcinoma.

Gu Q, Chen S, Dekker A, Wee L, Kalendralis P, Yan M, Wang J, Yuan J, Jiang Y

PubMed · Sep 25 2025
Neoadjuvant chemoimmunotherapy (nCIT) is becoming an important treatment strategy for patients with locally advanced esophageal squamous cell carcinoma (LA-ESCC). This study aimed to predict the pathological complete response (pCR) of these patients using variational autoencoder (VAE)-based deep learning and radiomics. A total of 253 LA-ESCC patients who were treated with nCIT and underwent enhanced CT at our hospital between July 2019 and July 2023 were included in the training cohort. VAE-based deep learning and radiomics were used to construct a deep learning (DL) model and a deep learning radiomics (DLR) model, which were trained and validated via 5-fold cross-validation among the 253 patients. Forty patients recruited from our institution between August 2023 and August 2024 served as the test cohort. The AUCs of the DL and DLR models were 0.935 (95% confidence interval [CI]: 0.786-0.992) and 0.949 (95% CI: 0.910-0.986) in the validation cohort, and 0.839 (95% CI: 0.726-0.853) and 0.926 (95% CI: 0.886-0.934) in the test cohort, respectively. The gap between precision and recall was smaller for the DLR model than for the DL model. The F1 scores of the DL and DLR models were 0.726 (95% CI: 0.476-0.842) and 0.766 (95% CI: 0.625-0.842) in the validation cohort, and 0.727 (95% CI: 0.645-0.811) and 0.836 (95% CI: 0.820-0.850) in the test cohort, respectively. Using VAE-based deep learning and radiomics, we constructed a DLR model for predicting pCR in nCIT-treated LA-ESCC patients that demonstrated superior performance compared with the DL model.
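
A minimal sketch of the VAE feature-extraction idea: the encoder's latent mean serves as the deep feature vector for a downstream pCR classifier. The architecture and ROI size are assumptions; the abstract gives no implementation details.

```python
# Hedged sketch: a convolutional VAE encoder over a CT ROI; mu is the
# deep feature vector. All sizes are illustrative assumptions.
import torch
import torch.nn as nn

class CTVAE(nn.Module):
    def __init__(self, latent_dim: int = 64):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.logvar = nn.Linear(32 * 16 * 16, latent_dim)

    def forward(self, x):
        h = self.enc(x)                     # x: (B, 1, 64, 64) CT ROI
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return z, mu, logvar                # mu doubles as the deep feature vector

_, features, _ = CTVAE()(torch.randn(8, 1, 64, 64))  # (8, 64) features for pCR model
```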

The identification and severity staging of chronic obstructive pulmonary disease using quantitative CT parameters, radiomics features, and deep learning features.

Feng S, Zhang W, Zhang R, Yang Y, Wang F, Miao C, Chen Z, Yang K, Yao Q, Liang Q, Zhao H, Chen Y, Liang C, Liang X, Chen R, Liang Z

PubMed · Sep 25 2025
To evaluate the value of quantitative CT (QCT) parameters, radiomics features, and deep learning (DL) features based on inspiratory and expiratory CT for the identification and severity staging of chronic obstructive pulmonary disease (COPD). This retrospective analysis included 223 COPD patients and 59 healthy controls from the Guangzhou cohort. We stratified the participants into a training cohort and a testing cohort (7:3) and extracted DL features with the VGG-16 network, radiomics features with the pyradiomics package, and QCT parameters with the NeuLungCARE software. Logistic regression was employed to construct models for the identification and severity staging of COPD, and the Shenzhen cohort was used as an external validation cohort to assess the generalizability of the models. Among the COPD identification models, Model 5-B1 (the model combining QCT with DL features in biphasic CT) showed the best predictive performance, with AUCs of 0.920 and 0.897 in the testing and external validation cohorts, respectively. Among the COPD severity staging models, the predictive performance of Model 4-B2 (the model combining QCT with radiomics features in biphasic CT) and Model 5-B2 (the model combining QCT with DL features in biphasic CT) was superior to that of the other models. This biphasic CT-based multi-modal approach integrating QCT with radiomics or DL features offers a clinically valuable tool for COPD identification and severity staging.
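
The fusion models reduce to logistic regression over concatenated feature sets; a hedged sketch with scikit-learn follows. Feature counts and the scaling step are assumptions, and the random arrays stand in for real NeuLungCARE, pyradiomics, and VGG-16 outputs.

```python
# Hedged sketch of the fusion step: concatenate biphasic QCT parameters with
# DL features and fit a logistic-regression classifier, as the abstract describes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
qct = rng.random((200, 10))              # inspiratory + expiratory QCT parameters
dl = rng.random((200, 128))              # VGG-16 deep features
y = rng.integers(0, 2, 200)              # COPD vs. healthy control

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(np.hstack([qct, dl]), y)
probs = model.predict_proba(np.hstack([qct, dl]))[:, 1]  # COPD probability
```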

A Deep Learning-Based Fully Automated Vertebra Segmentation and Labeling Workflow.

Lu H, Liu M, Yu K, Fang Y, Zhao J, Shi Y

PubMed · Sep 25 2025
Aims/Background: Spinal disorders, such as herniated discs and scoliosis, are highly prevalent conditions with rising incidence in the aging global population. Accurate analysis of spinal anatomical structures is a critical prerequisite for achieving high-precision positioning with surgical navigation robots. However, traditional manual segmentation is limited by low efficiency and poor consistency. This work aims to develop a fully automated, deep learning-based vertebra segmentation and labeling workflow that provides efficient and accurate preoperative analysis support for spine surgery navigation robots. Methods: In the localization stage, the You Only Look Once version 7 (YOLOv7) network predicts bounding boxes of individual vertebrae on computed tomography (CT) sagittal slices, transforming the 3D localization problem into a 2D one. The Density-Based Spatial Clustering of Applications with Noise (DBSCAN) algorithm then aggregates the 2D detections into 3D vertebral centers, which significantly reduces inference time and enhances localization accuracy. In the segmentation stage, a 3D U-Net model with an attention mechanism is trained on the region of interest (ROI) around each vertebral center, effectively extracting 3D structural features for precise segmentation. In the labeling stage, a vertebra labeling network combining ResNet and Transformer architectures, which extract rich intervertebral features, produces the final labels after post-processing based on positional logic. To verify the effectiveness of this workflow, experiments were conducted on 106 spinal CT datasets sourced from various devices, covering a wide range of clinical scenarios. Results: The method performed excellently in the three key tasks of localization, segmentation, and labeling, with a mean localization error (MLE) of 1.42 mm. Segmentation accuracy reached a Dice similarity coefficient (DSC) of 0.968 ± 0.014, intersection over union (IoU) of 0.879 ± 0.018, pixel accuracy (PA) of 0.988 ± 0.005, mean symmetric distance (MSD) of 1.09 ± 0.19 mm, and Hausdorff distance (HD) of 5.42 ± 2.05 mm. Labeling accuracy reached 94.36%. Conclusion: These quantitative assessments and visualizations confirm the effectiveness of our vertebra localization, segmentation, and labeling method, indicating its potential for deployment in spinal surgery navigation robots to provide accurate and efficient preoperative analysis and navigation support.
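
The clustering step in the localization stage can be sketched as follows: 2D YOLOv7 box centres from sagittal slices become (row, col, slice) points, and DBSCAN groups them into per-vertebra 3D centres. The eps and min_samples values are illustrative, not the authors' settings.

```python
# Hedged sketch: lifting 2D per-slice detections to 3D vertebral centres
# with DBSCAN, as in the localization stage described above.
import numpy as np
from sklearn.cluster import DBSCAN

# Each row: (row, col, slice_index) of a detected vertebra centre on a slice.
points = np.array([[120, 64, 30], [121, 65, 31], [122, 64, 32],
                   [180, 66, 30], [181, 67, 31]], dtype=float)

labels = DBSCAN(eps=5.0, min_samples=2).fit_predict(points)
centres = [points[labels == k].mean(axis=0) for k in set(labels) if k != -1]
# 'centres' seed the per-vertebra ROIs passed to the 3D U-Net segmentation stage.
```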

Multimodal text guided network for chest CT pneumonia classification.

Feng Y, Huang G, Ju F, Cui H

PubMed · Sep 25 2025
Pneumonia is a prevalent and serious respiratory disease with a substantial global burden. With advances in deep learning, automatic pneumonia diagnosis has attracted significant research attention in medical image classification. However, current methods still face several challenges. First, since lesions are often visible in only a few slices, slice-based classification algorithms may overlook critical spatial context in CT sequences, and slice-level annotations are labor-intensive. Moreover, pneumonia classification algorithms that operate on chest CT sequences with only sequence-level, coarse-grained labels remain limited, especially in integrating multi-modal information. To address these challenges, we propose a Multi-modal Text-Guided Network (MTGNet) for pneumonia classification from chest CT sequences. In this model, a sequential graph pooling network encodes the CT sequence by gradually selecting important slice features to obtain a sequence-level representation, and a CT description encoder learns representations from textual reports. To simulate the clinical diagnostic process, we employ multi-modal training and single-modal testing: a modal transfer module generates simulated textual features from the CT sequence, and cross-modal attention fuses the sequence-level and simulated textual representations, enriching the CT features with semantic information from textual descriptions. Furthermore, contrastive learning encourages discriminative features by maximizing the similarity of positive sample pairs and minimizing that of negative pairs. Extensive experiments on a self-constructed pneumonia CT sequence dataset demonstrate that the proposed model significantly improves classification performance.
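
The cross-modal fusion step can be sketched with standard multi-head attention, with the pooled CT-sequence feature querying the simulated textual features; dimensions and head count are assumptions, since MTGNet's exact configuration is not given here.

```python
# Hedged sketch: cross-modal attention where the CT-sequence representation
# attends to simulated textual features. Shapes are illustrative assumptions.
import torch
import torch.nn as nn

d = 256
ct_seq = torch.randn(1, 1, d)    # (batch, 1 token, d): pooled CT-sequence feature
sim_text = torch.randn(1, 8, d)  # (batch, 8 tokens, d): simulated report features

xattn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)
fused, _ = xattn(query=ct_seq, key=sim_text, value=sim_text)
enriched = ct_seq + fused        # residual fusion feeds the pneumonia classifier
```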