Page 200 of 3963955 results

Automated classification of chondroid tumor using 3D U-Net and radiomics with deep features.

Le Dinh T, Lee S, Park H, Lee S, Choi H, Chun KS, Jung JY

PubMed · Jul 1 2025
Classifying chondroid tumors is an essential step in effective treatment planning. Recently, with advances in computer-aided diagnosis and the increasing availability of medical imaging data, automated tumor classification using deep learning has shown promise in assisting clinical decision-making. In this study, we propose a hybrid approach that integrates deep learning and radiomics for chondroid tumor classification. First, we performed tumor segmentation using the nnUNetv2 framework, which provided three-dimensional (3D) delineation of tumor regions of interest (ROIs). From these ROIs, we extracted a set of radiomics features and deep learning-derived features. After feature selection, we identified 15 radiomics and 15 deep features to build classification models. We developed five machine learning classifiers: Random Forest, XGBoost, Gradient Boosting, LightGBM, and CatBoost. The approach integrating radiomics features, ROI-derived deep learning features, and clinical variables yielded the best overall classification results. Among the classifiers, CatBoost achieved the highest accuracy of 0.90 (95% CI 0.90-0.93), a weighted kappa of 0.85, and an AUC of 0.91. These findings highlight the potential of integrating 3D U-Net-assisted segmentation with radiomics and deep learning features to improve the classification of chondroid tumors.
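The weighted kappa reported above measures ordinal agreement between predicted and true tumor categories. As a minimal illustration (not the authors' code), a quadratic-weighted Cohen's kappa can be computed in plain Python:

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic weights, as commonly reported
    for ordinal tumor-category agreement. Labels are 0..n_classes-1."""
    # Observed confusion matrix
    obs = [[0.0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        obs[t][p] += 1
    n = len(y_true)
    hist_t = [sum(row) for row in obs]                                  # true-label counts
    hist_p = [sum(obs[r][c] for r in range(n_classes)) for c in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = (i - j) ** 2 / (n_classes - 1) ** 2                     # quadratic penalty
            num += w * obs[i][j]                                        # observed disagreement
            den += w * hist_t[i] * hist_p[j] / n                        # chance disagreement
    return 1.0 - num / den
```

Perfect agreement yields 1.0 and systematic disagreement approaches -1.0, matching the usual kappa scale.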

Patient-specific deep learning tracking for real-time 2D pancreas localisation in kV-guided radiotherapy.

Ahmed AM, Madden L, Stewart M, Chow BVY, Mylonas A, Brown R, Metz G, Shepherd M, Coronel C, Ambrose L, Turk A, Crispin M, Kneebone A, Hruby G, Keall P, Booth JT

PubMed · Jul 1 2025
In pancreatic stereotactic body radiotherapy (SBRT), accurate motion management is crucial for the safe delivery of high doses per fraction. Intra-fraction tracking with magnetic resonance imaging guidance for gated SBRT has shown potential for improved local control. Visualisation of the pancreas (and surrounding organs) remains challenging in intra-fraction kilovoltage (kV) imaging, requiring implanted fiducials. In this study, we investigate patient-specific deep-learning approaches to track the gross tumour volume (GTV), pancreas head, and whole pancreas in intra-fraction kV images. Conditional generative adversarial networks were trained and tested on data from 25 patients enrolled in an ethics-approved pancreatic SBRT trial for contour prediction on intra-fraction 2D kV images. Labelled digitally reconstructed radiographs (DRRs) were generated from contoured planning CTs (CT-DRRs) and cone-beam CTs (CBCT-DRRs). A population model was trained using CT-DRRs of 19 patients. Two patient-specific model types were created for six additional patients by fine-tuning the population model using CBCT-DRRs (CBCT-models) or CT-DRRs (CT-models) acquired in exhale breath-hold. Model predictions on unseen triggered-kV images from the corresponding six patients were evaluated against projected contours using the Dice similarity coefficient (DSC), centroid error (CE), average Hausdorff distance (AHD), and Hausdorff distance at the 95th percentile (HD95). The mean ± 1 SD (standard deviation) DSCs were 0.86 ± 0.09 (CBCT-models) and 0.78 ± 0.12 (CT-models). For AHD and CE, the CBCT-model predicted contours within 2.0 mm ≥90.3 % of the time, while HD95 was within 5.0 mm ≥90.0 % of the time; prediction time was 29.2 ± 3.7 ms per contour. The patient-specific CBCT-models outperformed the CT-models and predicted the three contours with 90th-percentile error ≤2.0 mm, indicating the potential for clinical real-time application.
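The DSC and centroid-error metrics used to evaluate the predicted contours can be sketched as follows; this is an illustrative implementation on flat binary masks (assumed non-empty), not the trial's evaluation code:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks (flat 0/1 lists)."""
    inter = sum(a * b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0

def centroid_error(mask_a, mask_b, width):
    """Euclidean distance between the centroids of two binary masks, in pixels.
    Masks are flat row-major lists of an image of the given width."""
    def centroid(mask):
        pts = [(i % width, i // width) for i, v in enumerate(mask) if v]
        n = len(pts)
        return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
    (xa, ya), (xb, yb) = centroid(mask_a), centroid(mask_b)
    return ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
```

AHD and HD95 would additionally require per-boundary-point distances, but follow the same mask-to-mask pattern.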

Deep learning-based lung cancer classification of CT images.

Faizi MK, Qiang Y, Wei Y, Qiao Y, Zhao J, Aftab R, Urrehman Z

PubMed · Jul 1 2025
Lung cancer remains a leading cause of cancer-related deaths worldwide, with accurate classification of lung nodules being critical for early diagnosis. Traditional radiological methods often struggle with high false-positive rates, underscoring the need for advanced diagnostic tools. In this work, we introduce DCSwinB, a novel deep learning-based lung nodule classifier designed to improve the accuracy and efficiency of benign and malignant nodule classification in CT images. Built on the Swin-Tiny Vision Transformer (ViT), DCSwinB incorporates several key innovations: a dual-branch architecture that combines CNNs for local feature extraction and Swin Transformer for global feature extraction, and a Conv-MLP module that enhances connections between adjacent windows to capture long-range dependencies in 3D images. Pretrained on the LUNA16 and LUNA16-K datasets, which consist of annotated CT scans from thousands of patients, DCSwinB was evaluated using ten-fold cross-validation. The model demonstrated superior performance, achieving 90.96% accuracy, 90.56% recall, 89.65% specificity, and an AUC of 0.94, outperforming existing models such as ResNet50 and Swin-T. These results highlight the effectiveness of DCSwinB in enhancing feature representation while optimizing computational efficiency. By improving the accuracy and reliability of lung nodule classification, DCSwinB has the potential to assist radiologists in reducing diagnostic errors, enabling earlier intervention and improved patient outcomes.
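The ten-fold cross-validation used to evaluate DCSwinB partitions the cases into ten disjoint validation folds. A minimal index generator (illustrative, not the authors' pipeline; no shuffling or stratification shown):

```python
def kfold_indices(n_samples, k):
    """Yield (train_idx, val_idx) pairs for k-fold cross-validation.
    Fold sizes differ by at most one when k does not divide n_samples."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    idx = list(range(n_samples))
    start = 0
    for size in fold_sizes:
        val = idx[start:start + size]
        train = idx[:start] + idx[start + size:]
        yield train, val
        start += size
```

Each sample appears in exactly one validation fold, so the per-fold metrics can be averaged into the reported accuracy, recall, specificity, and AUC.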

Photon-counting detector CT of the brain reduces variability of Hounsfield units and has a mean offset compared with energy-integrating detector CT.

Stein T, Lang F, Rau S, Reisert M, Russe MF, Schürmann T, Fink A, Kellner E, Weiss J, Bamberg F, Urbach H, Rau A

PubMed · Jul 1 2025
Distinguishing gray matter (GM) from white matter (WM) is essential for CT of the brain. The recently established photon-counting detector CT (PCD-CT) technology employs a novel detection technique that might allow more precise measurement of tissue attenuation (Hounsfield units, HU) and improved image quality in comparison with energy-integrating detector CT (EID-CT). To investigate this, we compared HU, GM vs. WM contrast, and image noise using automated deep learning-based brain segmentations. We retrospectively included patients who received either PCD-CT or EID-CT and did not display a cerebral pathology. A deep learning-based segmentation of the GM and WM was used to extract HU. From this, the gray-to-white ratio and contrast-to-noise ratio were calculated. We included 329 patients with EID-CT (mean age 59.8 ± 20.2 years) and 180 with PCD-CT (mean age 64.7 ± 16.5 years). GM and WM showed significantly lower HU in PCD-CT (GM: 40.4 ± 2.2 HU; WM: 33.4 ± 1.5 HU) compared to EID-CT (GM: 45.1 ± 1.6 HU; WM: 37.4 ± 1.6 HU, p < .001). Standard deviations of HU were also lower in PCD-CT (GM and WM both p < .001), and the contrast-to-noise ratio was significantly higher in PCD-CT compared to EID-CT (p < .001). Gray-to-white matter ratios were not significantly different across both modalities (p > .99). In an age-matched subset (n = 157 patients from both cohorts), all findings were replicated. This comprehensive comparison of HU in cerebral gray and white matter revealed substantially reduced image noise and an average offset with lower HU in PCD-CT, while the ratio between GM and WM remained constant. The potential need to adapt windowing presets based on this finding should be investigated in future studies.
CNR = Contrast-to-Noise Ratio; CTDIvol = Volume Computed Tomography Dose Index; EID = Energy-Integrating Detector; GWR = Gray-to-White Matter Ratio; HU = Hounsfield Units; PCD = Photon-Counting Detector; ROI = Region of Interest; VMI = Virtual Monoenergetic Images.
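The GWR and CNR derived from the segmented HU values can be sketched as below. The abstract does not state the study's exact noise definition, so here noise is pooled as the root-mean-square of the two tissue SDs, which is one common convention; the numbers in the usage note are illustrative, not the study's within-image SDs:

```python
def cnr(mean_gm, mean_wm, sd_gm, sd_wm):
    """Contrast-to-noise ratio between gray and white matter.
    Noise is pooled as the RMS of the two tissue standard deviations
    (one common convention; other definitions exist)."""
    noise = ((sd_gm ** 2 + sd_wm ** 2) / 2) ** 0.5
    return abs(mean_gm - mean_wm) / noise

def gwr(mean_gm, mean_wm):
    """Gray-to-white matter ratio of mean attenuation values."""
    return mean_gm / mean_wm
```

With hypothetical values `cnr(40.0, 30.0, 5.0, 5.0)` gives 2.0: a uniform HU offset between scanners changes neither CNR nor, proportionally, the GM-WM contrast, which is why the GWR can stay constant despite the offset the study reports.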

A Contrast-Enhanced Ultrasound Cine-Based Deep Learning Model for Predicting the Response of Advanced Hepatocellular Carcinoma to Hepatic Arterial Infusion Chemotherapy Combined With Systemic Therapies.

Han X, Peng C, Ruan SM, Li L, He M, Shi M, Huang B, Luo Y, Liu J, Wen H, Wang W, Zhou J, Lu M, Chen X, Zou R, Liu Z

PubMed · Jul 1 2025
Recently, a hepatic arterial infusion chemotherapy (HAIC)-associated combination therapeutic regimen, comprising HAIC and systemic therapies (molecular targeted therapy plus immunotherapy), referred to as HAIC combination therapy, has demonstrated promising anticancer effects. Identifying individuals who may potentially benefit from HAIC combination therapy could contribute to improved treatment decision-making for patients with advanced hepatocellular carcinoma (HCC). This dual-center study was a retrospective analysis of prospectively collected data from advanced HCC patients who underwent HAIC combination therapy and pretreatment contrast-enhanced ultrasound (CEUS) evaluations from March 2019 to March 2023. Two deep learning models, AE-3DNet and 3DNet, along with a time-intensity curve-based model, were developed for predicting therapeutic responses from pretreatment CEUS cine images. Diagnostic metrics, including the area under the receiver-operating-characteristic curve (AUC), were calculated to compare the performance of the models. Survival analysis was used to assess the relationship between predicted responses and prognostic outcomes. AE-3DNet was built on top of 3DNet, with spatiotemporal attention modules incorporated to enhance its capacity for dynamic feature extraction. 326 patients were included, 243 of whom formed the internal validation cohort, which was utilized for model development and fivefold cross-validation, while the rest formed the external validation cohort. Objective response (OR) and non-objective response (non-OR) were observed in 63% (206/326) and 37% (120/326) of the participants, respectively. Among the three efficacy prediction models assessed, AE-3DNet performed superiorly, with AUC values of 0.84 and 0.85 in the internal and external validation cohorts, respectively. AE-3DNet's predicted response survival curves closely resembled actual clinical outcomes.
The deep learning model of AE-3DNet developed based on pretreatment CEUS cine performed satisfactorily in predicting the responses of advanced HCC to HAIC combination therapy, which may serve as a promising tool for guiding combined therapy and individualized treatment strategies. Trial Registration: NCT02973685.
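The time-intensity curve-based comparison model relies on standard TIC descriptors. A sketch of typical features (peak enhancement, time-to-peak, trapezoidal area under the curve), assuming a uniformly sampled cine; this is illustrative, not the study's implementation:

```python
def tic_features(intensities, frame_interval_s):
    """Basic time-intensity-curve descriptors from a CEUS cine:
    peak enhancement, time-to-peak, and AUC by the trapezoidal rule.
    `intensities` is one value per frame at a fixed frame interval."""
    peak = max(intensities)
    ttp = intensities.index(peak) * frame_interval_s
    auc = sum((a + b) / 2 * frame_interval_s
              for a, b in zip(intensities, intensities[1:]))
    return {"peak": peak, "time_to_peak_s": ttp, "auc": auc}
```

Such hand-crafted curve features are what the 3D networks replace with learned spatiotemporal representations.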

Computed Tomography Advancements in Plaque Analysis: From Histology to Comprehensive Plaque Burden Assessment.

Catapano F, Lisi C, Figliozzi S, Scialò V, Politi LS, Francone M

PubMed · Jul 1 2025
Advancements in coronary computed tomography angiography (CCTA) facilitated the transition from traditional histological approaches to comprehensive plaque burden assessment. Recent updates in the European Society of Cardiology (ESC) guidelines emphasize CCTA's role in managing chronic coronary syndrome by enabling detailed monitoring of atherosclerotic plaque progression. Limitations of conventional CCTA, such as spatial resolution challenges in accurately characterizing plaque components like thin-cap fibroatheromas and necrotic lipid-rich cores, are addressed with photon-counting detector CT (PCD-CT) technology. PCD-CT offers enhanced spatial resolution and spectral imaging, improving the detection and characterization of high-risk plaque features while reducing artifacts. The integration of artificial intelligence (AI) in plaque analysis enhances diagnostic accuracy through automated plaque characterization and radiomics. These technological advancements support a comprehensive approach to plaque assessment, incorporating hemodynamic evaluations, morphological metrics, and AI-driven analysis, thereby enabling personalized patient care and improved prediction of acute clinical events.

Foundation Model and Radiomics-Based Quantitative Characterization of Perirenal Fat in Renal Cell Carcinoma Surgery.

Mei H, Chen H, Zheng Q, Yang R, Wang N, Jiao P, Wang X, Chen Z, Liu X

PubMed · Jul 1 2025
To quantitatively characterize the degree of perirenal fat adhesion using artificial intelligence in renal cell carcinoma. This retrospective study analyzed a total of 596 patients from three cohorts, utilizing corticomedullary phase computed tomography urography (CTU) images. The nnUNet v2 network combined with numerical computation was employed to segment the perirenal fat region. Pyradiomics algorithms and a computed tomography foundation model were used to extract features from CTU images separately, creating single-modality predictive models for identifying perirenal fat adhesion. By concatenating the Pyradiomics and foundation model features, an early fusion multimodal predictive signature was developed. The prognostic performance of the single-modality and multimodality models was further validated in two independent cohorts. The nnUNet v2 segmentation model accurately segmented both kidneys. The neural network and thresholding approach effectively delineated the perirenal fat region. Single-modality models based on radiomic and computed tomography foundation features demonstrated a certain degree of accuracy in diagnosing and identifying perirenal fat adhesion, while the early feature fusion diagnostic model outperformed the single-modality models. Also, the perirenal fat adhesion score showed a positive correlation with surgical time and intraoperative blood loss. AI-based radiomics and foundation models can accurately identify the degree of perirenal fat adhesion and have the potential to be used for surgical risk assessment.
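Early fusion by concatenating the Pyradiomics and foundation-model feature vectors can be sketched as below; the per-modality z-scoring before concatenation is an illustrative assumption, not a detail stated in the abstract:

```python
def zscore(vec):
    """Standardise a feature vector to zero mean and unit variance.
    A constant vector (zero variance) is left centred at zero."""
    n = len(vec)
    mean = sum(vec) / n
    sd = (sum((v - mean) ** 2 for v in vec) / n) ** 0.5 or 1.0
    return [(v - mean) / sd for v in vec]

def early_fusion(radiomics_feats, foundation_feats):
    """Early multimodal fusion: concatenate per-modality standardised
    features into a single vector for a downstream classifier."""
    return zscore(radiomics_feats) + zscore(foundation_feats)
```

The fused vector then feeds whatever classifier produces the adhesion score; standardising each modality first keeps one feature family from dominating purely by scale.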

Deep Learning Models for CT Segmentation of Invasive Pulmonary Aspergillosis, Mucormycosis, Bacterial Pneumonia and Tuberculosis: A Multicentre Study.

Li Y, Huang F, Chen D, Zhang Y, Zhang X, Liang L, Pan J, Tan L, Liu S, Lin J, Li Z, Hu G, Chen H, Peng C, Ye F, Zheng J

PubMed · Jul 1 2025
The differential diagnosis of invasive pulmonary aspergillosis (IPA), pulmonary mucormycosis (PM), bacterial pneumonia (BP) and pulmonary tuberculosis (PTB) is challenging due to overlapping clinical and imaging features. Manual CT lesion segmentation is time-consuming; deep-learning (DL)-based segmentation models offer a promising solution, yet disease-specific models for these infections remain underexplored. We aimed to develop and validate dedicated CT segmentation models for IPA, PM, BP and PTB to enhance diagnostic accuracy. Methods: Retrospective multi-centre data (115 IPA, 53 PM, 130 BP, 125 PTB) were used for training/internal validation, with 21 IPA, 8 PM, 30 BP and 31 PTB cases for external validation. Expert-annotated lesions served as ground truth. An improved 3D U-Net architecture was employed for segmentation, with preprocessing steps including normalisation, cropping and data augmentation. Performance was evaluated using Dice coefficients. Results: Internal validation achieved Dice scores of 78.83% (IPA), 93.38% (PM), 80.12% (BP) and 90.47% (PTB). External validation showed slightly reduced but robust performance: 75.09% (IPA), 77.53% (PM), 67.40% (BP) and 80.07% (PTB). The PM model demonstrated exceptional generalisability, scoring 83.41% on IPA data. Cross-validation revealed mutual applicability, with IPA/PTB models achieving >75% Dice for each other's lesions. BP segmentation showed lower but clinically acceptable performance (>72%), likely due to complex radiological patterns. Disease-specific DL segmentation models exhibited high accuracy, particularly for PM and PTB. While the IPA and BP models require refinement, all demonstrated cross-disease utility, suggesting immediate clinical value for preliminary lesion annotation. Future efforts should enhance datasets and optimise models for intricate cases.
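The preprocessing steps mentioned (normalisation and cropping) can be sketched on a toy 2D image; this is illustrative only, since the actual pipeline operates on 3D CT volumes:

```python
def minmax_normalise(img):
    """Scale a 2D image (list of rows) to the range [0, 1]."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    rng = (hi - lo) or 1  # guard against a constant image
    return [[(v - lo) / rng for v in row] for row in img]

def centre_crop(img, out_h, out_w):
    """Crop the central out_h x out_w region of a 2D image."""
    h, w = len(img), len(img[0])
    top, left = (h - out_h) // 2, (w - out_w) // 2
    return [row[left:left + out_w] for row in img[top:top + out_h]]
```

In a volumetric pipeline the same operations extend to the third axis, and normalisation is often done against fixed HU windows rather than per-image min/max.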

Synthetic Versus Classic Data Augmentation: Impacts on Breast Ultrasound Image Classification.

Medghalchi Y, Zakariaei N, Rahmim A, Hacihaliloglu I

PubMed · Jul 1 2025
The effectiveness of deep neural networks (DNNs) for ultrasound image analysis depends on the availability and accuracy of the training data. However, large-scale data collection and annotation, particularly in medical fields, is often costly and time-consuming, especially when healthcare professionals are already burdened with their clinical responsibilities. Ensuring that a model remains robust across different imaging conditions-such as variations in ultrasound devices and manual transducer operation-is crucial in ultrasound image analysis. Data augmentation is a widely used solution, as it increases both the size and diversity of datasets, thereby enhancing the generalization performance of DNNs. With the advent of generative networks such as generative adversarial networks (GANs) and diffusion-based models, synthetic data generation has emerged as a promising augmentation technique. However, comprehensive studies comparing classic and generative augmentation methods are lacking, particularly in ultrasound-based breast cancer imaging, where variability in breast density, tumor morphology, and operator skill poses significant challenges. This study aims to compare the effectiveness of classic and generative network-based data augmentation techniques in improving the performance and robustness of breast ultrasound image classification models. Specifically, we seek to determine whether the computational intensity of generative networks is justified in data augmentation. This analysis will provide valuable insights into the role and benefits of each technique in enhancing the diagnostic accuracy of DNNs for breast cancer diagnosis. The code for this work will be available at: https://github.com/yasamin-med/SCDA.git.
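A classic augmentation pass of the kind compared here (random horizontal flip plus random crop) can be sketched in a few lines; the specific operations and crop size are illustrative, not the study's configuration:

```python
import random

def hflip(img):
    """Horizontal flip of a 2D image (list of rows)."""
    return [row[::-1] for row in img]

def random_crop(img, out_h, out_w, rng):
    """Random out_h x out_w crop of a 2D image."""
    top = rng.randrange(len(img) - out_h + 1)
    left = rng.randrange(len(img[0]) - out_w + 1)
    return [row[left:left + out_w] for row in img[top:top + out_h]]

def augment(img, rng):
    """One classic augmentation pass: 50% chance of a flip,
    then a random crop one pixel smaller per side."""
    if rng.random() < 0.5:
        img = hflip(img)
    return random_crop(img, len(img) - 1, len(img[0]) - 1, rng)
```

Unlike GAN or diffusion-based synthesis, these transforms cost essentially nothing at training time, which is precisely the trade-off the study interrogates.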

Convolutional neural network-based measurement of crown-implant ratio for implant-supported prostheses.

Zhang JP, Wang ZH, Zhang J, Qiu J

PubMed · Jul 1 2025
Research has revealed that the crown-implant ratio (CIR) is a critical variable influencing the long-term stability of implant-supported prostheses in the oral cavity. Nevertheless, inefficient manual measurement and varied measurement methods have caused significant inconvenience in both clinical and scientific work. This study aimed to develop an automated system for detecting the CIR of implant-supported prostheses from radiographs, with the objective of enhancing the efficiency of radiograph interpretation for dentists. The method for measuring the CIR of implant-supported prostheses was based on convolutional neural networks (CNNs) and was designed to recognize implant-supported prostheses and identify key points around them. The experiment used You Only Look Once version 4 (YOLOv4) to locate the implant-supported prosthesis with a rectangular frame. Subsequently, two CNNs were used to identify key points. The first CNN determined the general position of the feature points, while the second CNN fine-tuned the output of the first network to precisely locate the key points. The network underwent testing on a self-built dataset, and the anatomic CIR and clinical CIR were obtained simultaneously through the vertical distance method. Key point accuracy was validated through Normalized Error (NE) values, and a set of data was selected to compare machine and manual measurement results. For statistical analysis, the paired t test was applied (α=.05). A dataset comprising 1106 images was constructed. The integration of multiple networks demonstrated satisfactory recognition of implant-supported prostheses and their surrounding key points. The average NE value for key points indicated a high level of accuracy. Statistical studies confirmed no significant difference in the crown-implant ratio between machine and manual measurement results (P>.05). Machine learning proved effective in identifying implant-supported prostheses and detecting their crown-implant ratios.
If applied as a clinical tool for analyzing radiographs, this research can assist dentists in efficiently and accurately obtaining crown-implant ratio results.
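The vertical distance method computes the CIR from the detected key points. A minimal sketch, assuming image y-coordinates for the crown tip, the crown-implant junction, and the implant apex (these parameter names are illustrative, not the study's notation):

```python
def crown_implant_ratio(crown_tip_y, junction_y, implant_apex_y):
    """Crown-implant ratio by the vertical distance method:
    crown height divided by implant length, both measured as
    vertical (y-axis) distances on the radiograph."""
    crown_height = abs(junction_y - crown_tip_y)
    implant_length = abs(implant_apex_y - junction_y)
    return crown_height / implant_length
```

The anatomic and clinical CIR differ only in which junction landmark is used (bone level vs. implant platform), so the same computation serves both once the key points are localised.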
