Predicting Thoracolumbar Vertebral Osteoporotic Fractures: Value Assessment of Chest CT-Based Machine Learning.

Chen Y, Che M, Yang H, Yu M, Yang Z, Qin J

pubmed | Jul 10 2025
To assess the value of a chest CT-based machine learning model in predicting osteoporotic vertebral fractures (OVFs) of the thoracolumbar vertebral bodies. We monitored 8910 patients aged ≥50 who underwent chest CT (2021-2024), identifying 54 incident OVF cases. Using propensity score matching, 108 controls were selected. The 162 patients were randomly assigned to training (n=113) and testing (n=49) cohorts. A clinical model was developed through logistic regression. Radiomics features were extracted from the thoracolumbar vertebral bodies (T11-L2), and the top 10 features were selected via minimum-redundancy maximum-relevance (mRMR) and the least absolute shrinkage and selection operator (LASSO) to construct a Radscore model. A nomogram model was established combining clinical and radiomics features and evaluated using receiver operating characteristic curves, decision curve analysis (DCA), and calibration plots. Volumetric bone mineral density (vBMD) (OR=0.95, 95%CI=0.93-0.97) and hemoglobin (HGB) (OR=0.96, 95%CI=0.94-0.98) were selected as independent risk factors for the clinical model. From 2288 radiomics features, 10 were selected for Radscore calculation. The nomogram model (Radscore + vBMD + HGB) achieved areas under the curve (AUC) of 0.938/0.906 in the training/testing cohorts, outperforming both the Radscore (AUC=0.902/0.871) and clinical (AUC=0.802/0.820) models. DCA and calibration plots confirmed the nomogram model's superior prediction capability. The nomogram model combining radiomics and clinical features showed high predictive performance, and its predictions of thoracolumbar OVFs can inform clinical decision-making.
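
A minimal sketch of the Radscore-plus-nomogram pipeline described above, assuming a precomputed radiomics feature matrix; all arrays and names are hypothetical, and the mRMR step is collapsed into LASSO selection for brevity:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import roc_auc_score

# X_rad: (n_patients, 2288) radiomics features; y: 1 = incident OVF.
# clin: columns [vBMD, HGB]. All data here is synthetic placeholder.
rng = np.random.default_rng(0)
X_rad = rng.normal(size=(162, 2288))
clin = rng.normal(size=(162, 2))
y = rng.integers(0, 2, size=162)

X_std = StandardScaler().fit_transform(X_rad)

# LASSO-based selection of the 10 strongest features (the paper also
# applies mRMR before this step).
lasso = LassoCV(cv=5).fit(X_std, y)
top10 = np.argsort(np.abs(lasso.coef_))[-10:]

# Radscore = linear combination of the selected features.
radscore = X_std[:, top10] @ lasso.coef_[top10]

# Nomogram model: logistic regression on Radscore + vBMD + HGB.
X_nomo = np.column_stack([radscore, clin])
nomo = LogisticRegression().fit(X_nomo, y)
print("training AUC:", roc_auc_score(y, nomo.predict_proba(X_nomo)[:, 1]))
```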

Objective assessment of diagnostic image quality in CT scans: what radiologists and researchers need to know.

Hoeijmakers EJI, Martens B, Wildberger JE, Flohr TG, Jeukens CRLPN

pubmed | Jul 10 2025
Quantifying diagnostic image quality (IQ) is not straightforward but essential for optimizing the balance between IQ and radiation dose, and for ensuring consistent high-quality images in CT imaging. This review provides a comprehensive overview of advanced objective reference-free IQ assessment methods for CT scans, beyond standard approaches. A literature search was performed in PubMed and Web of Science up to June 2024 to identify studies using advanced objective image quality methods on clinical CT scans. Only reference-free methods, which do not require a predefined reference image, were included. Traditional methods relying on the standard deviation of the Hounsfield units, the signal-to-noise ratio, or the contrast-to-noise ratio, all within a manually selected region-of-interest, were excluded. Eligible results were categorized by IQ metric (i.e., noise, contrast, spatial resolution, and other) and assessment method (manual, automated, and artificial intelligence (AI)-based). Thirty-five studies were included that proposed or employed reference-free IQ methods, identifying 12 noise assessment methods, 4 contrast assessment methods, 14 spatial resolution assessment methods, and 7 others, based on manual, automated, or AI-based approaches. This review emphasizes the transition from manual to fully automated approaches for IQ assessment, including the potential of AI-based methods, and it provides a reference tool for researchers and radiologists who need to make a well-considered choice in how to evaluate IQ in CT imaging. This review examines the challenge of quantifying diagnostic CT image quality, essential for optimization studies and ensuring consistent high-quality images, by providing an overview of objective reference-free IQ assessment methods that go beyond the standard approaches. Quantifying diagnostic CT image quality remains a key challenge. This review summarizes objective diagnostic image quality assessment techniques beyond standard metrics. A decision tree is provided to help select optimal image quality assessment techniques.
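
To make "reference-free" concrete: one widely used automated noise measure takes the mode of a local standard-deviation map inside a soft-tissue mask, so no ROI is drawn by hand. A sketch, assuming a single 2-D slice in Hounsfield units (thresholds illustrative, not from the review):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def global_noise_level(hu, mask=None, kernel=7):
    """Estimate CT noise as the mode of the local-SD histogram."""
    hu = hu.astype(float)
    mean = uniform_filter(hu, kernel)
    sq_mean = uniform_filter(hu ** 2, kernel)
    local_sd = np.sqrt(np.clip(sq_mean - mean ** 2, 0, None))
    if mask is None:
        mask = (hu > 0) & (hu < 100)   # crude soft-tissue range (assumption)
    hist, edges = np.histogram(local_sd[mask], bins=200)
    return edges[np.argmax(hist)]      # mode = dominant noise magnitude

# Usage on a synthetic noisy slice:
slice_hu = 50 + 10 * np.random.default_rng(1).normal(size=(512, 512))
print(f"estimated noise: {global_noise_level(slice_hu):.1f} HU")
```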

Recurrence prediction of invasive ductal carcinoma from preoperative contrast-enhanced computed tomography using deep convolutional neural network.

Umezu M, Kondo Y, Ichikawa S, Sasaki Y, Kaneko K, Ozaki T, Koizumi N, Seki H

pubmed | Jul 10 2025
Predicting the risk of breast cancer recurrence is crucial for guiding therapeutic strategies, including enhanced surveillance and the consideration of additional treatment after surgery. In this study, we developed a deep convolutional neural network (DCNN) model to predict recurrence within six years after surgery using preoperative contrast-enhanced computed tomography (CECT) images, which are widely available and effective for detecting distant metastases. This retrospective study included preoperative CECT images from 133 patients with invasive ductal carcinoma. The images were classified into recurrence and no-recurrence groups using ResNet-101 and DenseNet-201. Classification performance was evaluated using the area under the receiver operating characteristic curve (AUC) with leave-one-patient-out cross-validation. At the optimal threshold, the classification accuracies for ResNet-101 and DenseNet-201 were 0.73 and 0.72, respectively. The median (interquartile range) AUC of DenseNet-201 (0.70 [0.69-0.72]) was significantly higher than that of ResNet-101 (0.68 [0.66-0.68]) (p < 0.05). These results suggest the potential of preoperative CECT-based DCNN models to predict breast cancer recurrence without the need for additional invasive procedures.
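
A skeleton of leave-one-patient-out cross-validation with a DenseNet-201 backbone, as described above; this is not the authors' code, and `patients` (a list of (slice_tensor, label) pairs with slices preprocessed to 3-channel images) is hypothetical:

```python
import torch
import torch.nn as nn
from torchvision.models import densenet201

def make_model():
    m = densenet201(weights="IMAGENET1K_V1")  # ImageNet init (downloads weights)
    m.classifier = nn.Linear(m.classifier.in_features, 1)  # binary head
    return m

def lopo_scores(patients, epochs=1):
    scores, labels = [], []
    for i, (x_test, y_test) in enumerate(patients):
        train = [p for j, p in enumerate(patients) if j != i]  # hold out one patient
        model, loss_fn = make_model(), nn.BCEWithLogitsLoss()
        opt = torch.optim.Adam(model.parameters(), lr=1e-4)
        model.train()
        for _ in range(epochs):
            for x, y in train:
                opt.zero_grad()
                out = model(x).mean()  # pool slice logits per patient
                loss_fn(out, torch.tensor(float(y))).backward()
                opt.step()
        model.eval()
        with torch.no_grad():
            scores.append(torch.sigmoid(model(x_test).mean()).item())
        labels.append(y_test)
    return scores, labels  # feed into an ROC/AUC routine
```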

Non-invasive identification of TKI-resistant NSCLC: a multi-model AI approach for predicting EGFR/TP53 co-mutations.

Li J, Xu R, Wang D, Liang Z, Li Y, Wang Q, Bi L, Qi Y, Zhou Y, Li W

pubmed | Jul 10 2025
To investigate the value of a multi-model approach based on preoperative CT scans in predicting EGFR/TP53 co-mutation status. We retrospectively included 2171 patients with non-small cell lung cancer (NSCLC) with pre-treatment computed tomography (CT) scans and epidermal growth factor receptor (EGFR) gene sequencing results from West China Hospital between January 2013 and April 2024. A deep-learning model was built to predict EGFR/tumor protein 53 (TP53) co-mutation status. Model performance was evaluated by area under the curve (AUC) and Kaplan-Meier analysis. We further compared the multi-dimensional model with three one-dimensional models separately, and we explored the value of combining clinical factors with machine-learning factors. Additionally, we investigated 546 patients with 56-panel next-generation sequencing and low-dose computed tomography (LDCT) to explore the biological mechanisms underlying the radiomics features. In our cohort of 2171 patients (1153 males, 1018 females; median age 60 years), single-dimensional models were developed using data from 1055 eligible patients. The multi-dimensional model utilizing a Random Forest classifier achieved superior performance, yielding the highest AUC of 0.843 for predicting EGFR/TP53 co-mutations in the test set. The multi-dimensional model demonstrates promising potential for non-invasive prediction of EGFR and TP53 co-mutations, facilitating early and informed clinical decision-making in NSCLC patients at risk of treatment resistance.
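
The abstract pairs AUC with Kaplan-Meier analysis; a minimal sketch of that survival-stratification step using the lifelines library, with all follow-up data synthetic and hypothetical:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
pred_pos = rng.integers(0, 2, 200).astype(bool)  # predicted co-mutant status
time = rng.exponential(24, 200)                  # follow-up in months (synthetic)
event = rng.integers(0, 2, 200)                  # 1 = progression/death observed

# Fit one Kaplan-Meier curve per predicted group.
km_pos, km_neg = KaplanMeierFitter(), KaplanMeierFitter()
km_pos.fit(time[pred_pos], event[pred_pos], label="predicted co-mutant")
km_neg.fit(time[~pred_pos], event[~pred_pos], label="predicted wild-type")

# Log-rank test for separation between the two predicted groups.
res = logrank_test(time[pred_pos], time[~pred_pos],
                   event_observed_A=event[pred_pos],
                   event_observed_B=event[~pred_pos])
print(f"log-rank p = {res.p_value:.3f}")
```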

FF Swin-Unet: a strategy for automated segmentation and severity scoring of NAFLD.

Fan L, Lei Y, Song F, Sun X, Zhang Z

pubmed | Jul 10 2025
Non-alcoholic fatty liver disease (NAFLD) is a significant risk factor for liver cancer and cardiovascular diseases, imposing substantial social and economic burdens. Computed tomography (CT) scans are crucial for diagnosing NAFLD and assessing its severity. However, current manual measurement techniques require considerable effort and resources from radiologists, and existing research lacks standardized methods for classifying NAFLD severity. To address these challenges, we propose a novel method for NAFLD segmentation and automated severity scoring. The method consists of three key modules: (1) the Semi-automatization nnU-Net Module (SNM) constructs a high-quality dataset by combining manual annotations with semi-automated refinement; (2) the Focal Feature Fusion Swin-Unet Module (FSM) enhances liver and spleen segmentation through multi-scale feature fusion and Swin Transformer-based architectures; (3) the Automated Severity Scoring Module (ASSM) integrates segmentation results with radiological features to classify NAFLD severity. These modules are embedded in a Flask-RESTful API-based system, enabling users to upload abdominal CT data for automated preprocessing, segmentation, and scoring. The Focal Feature Fusion Swin-Unet (FF Swin-Unet) method significantly improves segmentation accuracy, achieving a Dice similarity coefficient (DSC) of 95.64% and a 95th percentile Hausdorff distance (HD95) of 15.94. The accuracy of the automated severity scoring is 90%. With model compression and ONNX deployment, the evaluation time per case is approximately 5 seconds. Compared to manual diagnosis, the system can process a large volume of data simultaneously, rapidly, and efficiently while maintaining the same level of diagnostic accuracy, significantly reducing the workload of medical professionals. Our research demonstrates that the proposed system processes large volumes of CT data with high accuracy and provides automated NAFLD severity scores quickly and efficiently. This method has the potential to significantly reduce the workload of medical professionals and holds immense clinical application potential.
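
For reference, the reported DSC metric and a common CT steatosis criterion in compact form; the liver-to-spleen HU ratio and its thresholds below are a standard radiological rule of thumb, an assumption here rather than the paper's actual ASSM scoring logic:

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def severity_from_hu(liver_hu_mean: float, spleen_hu_mean: float) -> str:
    """Grade steatosis from mean liver vs. spleen attenuation (illustrative)."""
    ratio = liver_hu_mean / spleen_hu_mean
    if ratio >= 1.0:
        return "normal"
    return "mild" if ratio >= 0.8 else "moderate-to-severe"

# Usage with toy masks and attenuation values:
a, b = np.zeros((8, 8), bool), np.zeros((8, 8), bool)
a[2:6, 2:6] = b[3:7, 3:7] = True
print(f"DSC={dice(a, b):.2f}, grade={severity_from_hu(40.0, 52.0)}")
```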

Cross-Modality Masked Learning for Survival Prediction in ICI-Treated NSCLC Patients

Qilong Xing, Zikai Song, Bingxin Gong, Lian Yang, Junqing Yu, Wei Yang

arXiv preprint | Jul 9 2025
Accurate prognosis of non-small cell lung cancer (NSCLC) patients undergoing immunotherapy is essential for personalized treatment planning, enabling informed patient decisions, and improving both treatment outcomes and quality of life. However, the lack of large, relevant datasets and effective multi-modal feature fusion strategies pose significant challenges in this domain. To address these challenges, we present a large-scale dataset and introduce a novel framework for multi-modal feature fusion aimed at enhancing the accuracy of survival prediction. The dataset comprises 3D CT images and corresponding clinical records from NSCLC patients treated with immune checkpoint inhibitors (ICI), along with progression-free survival (PFS) and overall survival (OS) data. We further propose a cross-modality masked learning approach for medical feature fusion, consisting of two distinct branches, each tailored to its respective modality: a Slice-Depth Transformer for extracting 3D features from CT images and a graph-based Transformer for learning node features and relationships among clinical variables in tabular data. The fusion process is guided by a masked modality learning strategy, wherein the model utilizes the intact modality to reconstruct missing components. This mechanism improves the integration of modality-specific features, fostering more effective inter-modality relationships and feature interactions. Our approach demonstrates superior performance in multi-modal integration for NSCLC survival prediction, surpassing existing methods and setting a new benchmark for prognostic models in this context.
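
A compressed sketch of the masked-modality idea described above (mask part of one modality's token sequence and train the fused model to reconstruct it from the intact modality); this is a toy stand-in, not the authors' implementation, and all dimensions are hypothetical:

```python
import torch
import torch.nn as nn

class MaskedFusion(nn.Module):
    """Toy cross-modality masked-reconstruction objective.

    img_tok / tab_tok stand in for the Slice-Depth Transformer and
    graph-Transformer outputs, respectively.
    """
    def __init__(self, dim=256):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
        self.fuser = nn.TransformerEncoder(layer, num_layers=2)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.recon = nn.Linear(dim, dim)

    def forward(self, img_tok, tab_tok, mask_ratio=0.5):
        B, N, D = tab_tok.shape
        keep = torch.rand(B, N, device=tab_tok.device) > mask_ratio
        masked_tab = torch.where(keep.unsqueeze(-1), tab_tok,
                                 self.mask_token.expand(B, N, D))
        fused = self.fuser(torch.cat([img_tok, masked_tab], dim=1))
        recon = self.recon(fused[:, img_tok.shape[1]:])   # tabular positions
        return ((recon - tab_tok) ** 2)[~keep].mean()     # loss on masked slots

model = MaskedFusion()
loss = model(torch.randn(2, 32, 256), torch.randn(2, 12, 256))
loss.backward()
```

Reconstructing masked clinical tokens from the CT branch (and vice versa) is what forces the two encoders to share information rather than learn in isolation.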

Airway Segmentation Network for Enhanced Tubular Feature Extraction

Qibiao Wu, Yagang Wang, Qian Zhang

arXiv preprint | Jul 9 2025
Manual annotation of airway regions in computed tomography images is a time-consuming and expertise-dependent task. Automatic airway segmentation is therefore a prerequisite for enabling rapid bronchoscopic navigation and the clinical deployment of bronchoscopic robotic systems. Although convolutional neural network methods have gained considerable attention in airway segmentation, the unique tree-like structure of airways poses challenges for conventional and deformable convolutions, which often fail to focus on fine airway structures, leading to missed segments and discontinuities. To address this issue, this study proposes a novel tubular feature extraction network, named TfeNet. TfeNet introduces a direction-aware convolution operation that first applies spatial rotation transformations to adjust the sampling positions of linear convolution kernels. The deformed kernels are then represented as line segments or polylines in 3D space. Furthermore, a tubular feature fusion module (TFFM) is designed based on asymmetric convolution and residual connection strategies, enhancing the network's focus on subtle airway structures. Extensive experiments conducted on one public dataset and two datasets used in airway segmentation challenges demonstrate that the proposed TfeNet achieves more accurate and continuous airway structure predictions compared with existing methods. In particular, TfeNet achieves the highest overall score of 94.95% on the current largest airway segmentation dataset, Airway Tree Modeling (ATM22), and demonstrates advanced performance on the lung fibrosis dataset (AIIB23). The code is available at https://github.com/QibiaoWu/TfeNet.
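
The TFFM is described as asymmetric convolutions plus residual connections; a rough PyTorch sketch of that general pattern follows (the real implementation is in the linked repository; this is only an illustration of the idea):

```python
import torch
import torch.nn as nn

class AsymResBlock(nn.Module):
    """Residual block with axis-wise (asymmetric) 3-D convolutions.

    Three thin kernels sweep along z, y, and x respectively, biasing the
    receptive field toward thin tubular structures such as airways.
    """
    def __init__(self, ch):
        super().__init__()
        self.conv_z = nn.Conv3d(ch, ch, kernel_size=(3, 1, 1), padding=(1, 0, 0))
        self.conv_y = nn.Conv3d(ch, ch, kernel_size=(1, 3, 1), padding=(0, 1, 0))
        self.conv_x = nn.Conv3d(ch, ch, kernel_size=(1, 1, 3), padding=(0, 0, 1))
        self.bn = nn.BatchNorm3d(ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.conv_z(x) + self.conv_y(x) + self.conv_x(x)  # fuse axis features
        return self.act(x + self.bn(out))                       # residual connection

feat = torch.randn(1, 16, 32, 64, 64)   # (B, C, D, H, W)
print(AsymResBlock(16)(feat).shape)
```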

Feasibility study of "double-low" scanning protocol combined with artificial intelligence iterative reconstruction algorithm for abdominal computed tomography enhancement in patients with obesity.

Ji MT, Wang RR, Wang Q, Li HS, Zhao YX

pubmed | Jul 9 2025
To evaluate the efficacy of the "double-low" scanning protocol combined with the artificial intelligence iterative reconstruction (AIIR) algorithm for abdominal computed tomography (CT) enhancement in obese patients and to identify the optimal AIIR algorithm level. Patients with a body mass index ≥ 30.00 kg/m<sup>2</sup> who underwent abdominal CT enhancement were randomly assigned to group A or group B. Group A underwent the conventional protocol with the Karl 3D iterative reconstruction algorithm at levels 3-5. Group B underwent the "double-low" protocol with the AIIR algorithm at levels 1-5. Radiation dose, total iodine intake, and subjective and objective image quality were recorded. The optimal reconstruction levels for arterial-phase and portal-venous-phase images were identified. Comparisons were made in terms of radiation dose, iodine intake, and image quality. Overall, 150 patients with obesity were enrolled, with 75 cases in each group. Karl 3D level 5 was the optimal algorithm level for group A, while AIIR level 4 was the optimal algorithm level for group B. AIIR level 4 images in group B exhibited significantly superior subjective and objective image quality compared with Karl 3D level 5 images in group A (P < 0.001). Group B showed reductions in mean CT dose index values, dose-length product, size-specific dose estimate based on water-equivalent diameter, and total iodine intake compared with group A (P < 0.001). The "double-low" scanning protocol combined with the AIIR algorithm significantly reduces radiation dose and iodine intake during abdominal CT enhancement in obese patients. AIIR level 4 is the optimal reconstruction level for arterial-phase and portal-venous-phase images in this patient population.
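
The dose metrics compared above follow standard definitions: DLP = CTDIvol x scan length, and SSDE scales CTDIvol by a size-dependent conversion factor derived from the water-equivalent diameter (coefficients below are for the 32 cm body phantom per AAPM Report 204/220). A worked sketch with illustrative numbers, not values from the study:

```python
import math

def dlp(ctdi_vol_mgy: float, scan_length_cm: float) -> float:
    """Dose-length product in mGy*cm: CTDIvol times scan length."""
    return ctdi_vol_mgy * scan_length_cm

def ssde(ctdi_vol_mgy: float, dw_cm: float) -> float:
    """Size-specific dose estimate from water-equivalent diameter Dw.

    Conversion factor (32 cm phantom): f = 3.704369 * exp(-0.03671937 * Dw).
    """
    f = 3.704369 * math.exp(-0.03671937 * dw_cm)
    return f * ctdi_vol_mgy

print(dlp(8.0, 45.0))                 # 360.0 mGy*cm
print(round(ssde(8.0, 34.0), 2))      # larger Dw -> smaller conversion factor
```

In an obese cohort Dw is large, so the conversion factor drops below 1 and SSDE falls below CTDIvol, which is why the study reports SSDE alongside the phantom-based indices.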

A machine learning model reveals invisible microscopic variation in acute ischaemic stroke (≤ 6 h) with non-contrast computed tomography.

Tan J, Xiao M, Wang Z, Wu S, Han K, Wang H, Huang Y

pubmed | Jul 9 2025
In most medical centers, particularly primary hospitals, non-contrast computed tomography (NCCT) serves as the primary imaging modality for diagnosing acute ischemic stroke. However, because the density difference between the infarct and the surrounding normal brain tissue on NCCT images is small within the first 6 h post-onset, promptly and accurately localizing and quantifying the infarct at this early stage is challenging. To investigate whether a radiomics-based model using NCCT could effectively assess the risk of acute ischemic stroke (AIS), this study proposed a machine learning (ML) model for infarct detection, enabling automated quantitative assessment of AIS lesions on NCCT images. In this retrospective study, NCCT images from 228 patients with AIS (< 6 h from onset) were included and paired with MRI diffusion-weighted imaging (DWI) images obtained within 1 to 7 days of onset. NCCT and DWI images were co-registered using the Elastix toolbox. The internal dataset (153 AIS patients) included 179 AIS VOIs and 153 non-AIS VOIs as the training and validation groups. Subsequent cases (75 patients) after 2021 served as the independent test set, comprising 94 AIS VOIs and 75 non-AIS VOIs. The random forest (RF) model demonstrated robust diagnostic performance across the training, validation, and independent test sets. The areas under the receiver operating characteristic (ROC) curves were 0.858 (95% CI: 0.808-0.908), 0.829 (95% CI: 0.748-0.910), and 0.789 (95% CI: 0.717-0.860), respectively. Accuracies were 79.399%, 77.778%, and 73.965%; sensitivities were 81.679%, 77.083%, and 68.085%; and specificities were 76.471%, 78.431%, and 81.333%, respectively. NCCT-based radiomics combined with a machine learning model could discriminate between AIS and non-AIS patients within 6 h of onset. This approach holds promise for improving early stroke diagnosis and patient outcomes.
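
A minimal sketch of the RF classification and threshold-based sensitivity/specificity reporting used above, on hypothetical per-VOI radiomics features (synthetic data, so the printed metrics are meaningless placeholders):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical feature matrices: one row per VOI (AIS vs. non-AIS),
# split into training and an independent test set as in the abstract.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(332, 50)), rng.integers(0, 2, 332)
X_test, y_test = rng.normal(size=(169, 50)), rng.integers(0, 2, 169)

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_train, y_train)
prob = rf.predict_proba(X_test)[:, 1]
print("test AUC:", round(roc_auc_score(y_test, prob), 3))

# Sensitivity/specificity at the Youden-optimal operating point.
fpr, tpr, thr = roc_curve(y_test, prob)
best = np.argmax(tpr - fpr)
print(f"sensitivity={tpr[best]:.3f}, specificity={1 - fpr[best]:.3f}")
```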

MMDental - A multimodal dataset of tooth CBCT images with expert medical records.

Wang C, Zhang Y, Wu C, Liu J, Wu L, Wang Y, Huang X, Feng X, Wang Y

pubmed | Jul 9 2025
In the rapidly evolving field of intelligent dental healthcare, where Artificial Intelligence (AI) plays a pivotal role, the demand for multimodal datasets is critical. Existing public datasets are primarily composed of single-modal data, predominantly dental radiographs or scans, which limits the development of AI-driven applications for intelligent dental treatment. In this paper, we present a MultiModal Dental (MMDental) dataset to address this gap. MMDental comprises data from 660 patients, including 3D Cone-beam Computed Tomography (CBCT) images and corresponding detailed expert medical records with initial diagnoses and follow-up documentation. All CBCT scans were conducted under the guidance of professional physicians, and all patient records were reviewed by senior doctors. To the best of our knowledge, this is the first and largest dataset containing 3D CBCT images of teeth with corresponding medical records. Furthermore, we provide a comprehensive analysis of the dataset by exploring patient demographics, the prevalence of various dental conditions, and the disease distribution across age groups. We believe this work will be beneficial for further advancements in intelligent dental treatment.
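
The demographic and disease-distribution analysis described above amounts to grouping a patient manifest; a sketch with a hypothetical manifest layout (the actual MMDental file structure may differ):

```python
import pandas as pd

# Hypothetical manifest: one row per patient, linking the CBCT volume
# to the parsed medical record. Paths and fields are illustrative.
records = pd.DataFrame({
    "patient_id": ["p001", "p002", "p003", "p004"],
    "age": [23, 41, 35, 67],
    "diagnosis": ["impacted tooth", "periodontitis", "caries", "periodontitis"],
    "cbct_path": ["cbct/p001.nii.gz", "cbct/p002.nii.gz",
                  "cbct/p003.nii.gz", "cbct/p004.nii.gz"],
})

# Disease distribution across age groups, as in the paper's dataset analysis.
records["age_group"] = pd.cut(records["age"], bins=[0, 30, 50, 100],
                              labels=["<=30", "31-50", ">50"])
print(records.groupby(["age_group", "diagnosis"], observed=True).size())
```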