Page 96 of 102 (1014 results)

CT-based AI framework leveraging multi-scale features for predicting pathological grade and Ki67 index in clear cell renal cell carcinoma: a multicenter study.

Yang H, Zhang Y, Li F, Liu W, Zeng H, Yuan H, Ye Z, Huang Z, Yuan Y, Xiang Y, Wu K, Liu H

PubMed · May 14 2025
To explore whether a CT-based AI framework, leveraging multi-scale features, can offer a non-invasive approach to accurately predict pathological grade and Ki67 index in clear cell renal cell carcinoma (ccRCC). In this multicenter retrospective study, a total of 1073 pathologically confirmed ccRCC patients from seven cohorts were split into internal cohorts (training and validation sets) and an external test set. The AI framework comprised an image processor, a 3D-kidney and tumor segmentation model by 3D-UNet, a multi-scale features extractor built upon unsupervised learning, and a multi-task classifier utilizing XGBoost. A quantitative model interpretation technique, known as SHapley Additive exPlanations (SHAP), was employed to explore the contribution of multi-scale features. The 3D-UNet model showed excellent performance in segmenting both the kidney and tumor regions, with Dice coefficients exceeding 0.92. The proposed multi-scale features model exhibited strong predictive capability for pathological grading and Ki67 index, with AUROC values of 0.84 and 0.87, respectively, in the internal validation set, and 0.82 and 0.82, respectively, in the external test set. The SHAP results demonstrated that features from radiomics, the 3D Auto-Encoder, and dimensionality reduction all made significant contributions to both prediction tasks. The proposed AI framework, leveraging multi-scale features, accurately predicts the pathological grade and Ki67 index of ccRCC. The CT-based AI framework leveraging multi-scale features offers a promising avenue for accurately predicting the pathological grade and Ki67 index of ccRCC preoperatively, indicating a direction for non-invasive assessment. Non-invasively determining pathological grade and Ki67 index in ccRCC could guide treatment decisions. The AI framework integrates segmentation, classification, and model interpretation, enabling fully automated analysis. 
The AI framework enables non-invasive preoperative detection of high-risk tumors, assisting clinical decision-making.
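The segmentation quality above is reported as Dice coefficients exceeding 0.92. As a minimal sketch of how that metric is computed (toy 2D masks standing in for the 3D kidney/tumor volumes; the paper's own evaluation code is not shown here):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity between two binary masks (1 = foreground)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy masks: two 4x4 squares offset by one column
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=int); pred[2:6, 3:7] = 1
print(round(dice_coefficient(pred, truth), 3))  # → 0.75
```

A Dice of 1.0 means perfect overlap; the reported > 0.92 indicates near-complete agreement with the reference masks.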

AI-based metal artefact correction algorithm for radiotherapy patients with dental hardware in head and neck CT: Towards precise imaging.

Yu X, Zhong S, Zhang G, Du J, Wang G, Hu J

PubMed · May 14 2025
To investigate the clinical efficacy of an AI-based metal artefact correction algorithm (AI-MAC) for reducing dental metal artefacts in head and neck CT, compared to conventional interpolation-based MAC. We retrospectively collected 41 patients with non-removable dental hardware who underwent non-contrast head and neck CT prior to radiotherapy. All images were reconstructed with a standard reconstruction algorithm (SRA) and were additionally processed with both conventional MAC and AI-MAC. The image quality of SRA, MAC and AI-MAC was compared by qualitative scoring on a 5-point scale, with scores ≥ 3 considered interpretable. This was followed by a quantitative evaluation, including signal-to-noise ratio (SNR) and artefact index (Idx_artefact). Organ contouring accuracy was quantified by calculating the Dice similarity coefficient (DSC) and Hausdorff distance (HD) for the oral cavity and teeth, using the clinically accepted contouring as the reference. Moreover, the treatment planning dose distribution for the oral cavity was assessed. AI-MAC yielded image quality superior to both SRA and MAC, qualitatively and on the quantitative metrics SNR and Idx_artefact. Image interpretability improved significantly, from 41.46% for SRA and 56.10% for MAC to 92.68% for AI-MAC (p < 0.05). Compared to SRA and MAC, the best DSC and HD for both the oral cavity and teeth were obtained with AI-MAC (all p < 0.05). No significant differences in dose distribution were found among the three image sets. AI-MAC outperforms conventional MAC in metal artefact reduction, achieving superior image quality with high image interpretability for patients with dental hardware undergoing head and neck CT. Furthermore, AI-MAC improves the accuracy of organ contouring while providing dose calculations that are robust to metal artefacts in radiotherapy. AI-MAC is a novel deep learning-based technique for reducing metal artefacts on CT.
This in vivo study is the first to demonstrate its capability to reduce metal artefacts while preserving organ visualization, compared with conventional MAC.
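The two quantitative metrics above can be sketched in a few lines. The ROI values are synthetic, and the artefact-index definition (excess standard deviation in a streak-corrupted ROI over a clean reference ROI) is an assumption, since the paper's exact formula is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical HU patches: a streak-corrupted ROI near the hardware
# and a clean reference ROI of the same tissue
roi_artefact = rng.normal(40, 60, size=(32, 32))
roi_reference = rng.normal(40, 12, size=(32, 32))

# SNR: mean signal over noise (standard deviation) in the reference ROI
snr = roi_reference.mean() / roi_reference.std()

# Artefact index: noise attributable to artefacts, beyond the baseline noise
idx_artefact = np.sqrt(max(roi_artefact.std() ** 2 - roi_reference.std() ** 2, 0.0))
print(f"SNR={snr:.2f}, Idx_artefact={idx_artefact:.1f} HU")
```

Higher SNR and lower Idx_artefact correspond to the improvement AI-MAC shows over SRA and MAC.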

An incremental algorithm for non-convex AI-enhanced medical image processing

Elena Morotti

arXiv preprint · May 13 2025
Solving non-convex regularized inverse problems is challenging due to their complex optimization landscapes and multiple local minima. However, these models remain widely studied as they often yield high-quality, task-oriented solutions, particularly in medical imaging, where the goal is to enhance clinically relevant features rather than merely minimizing global error. We propose incDG, a hybrid framework that integrates deep learning with incremental model-based optimization to efficiently approximate the $\ell_0$-optimal solution of imaging inverse problems. Built on the Deep Guess strategy, incDG exploits a deep neural network to generate effective initializations for a non-convex variational solver, which refines the reconstruction through regularized incremental iterations. This design combines the efficiency of Artificial Intelligence (AI) tools with the theoretical guarantees of model-based optimization, ensuring robustness and stability. We validate incDG on TpV-regularized optimization tasks, demonstrating its effectiveness in medical image deblurring and tomographic reconstruction across diverse datasets, including synthetic images, brain CT slices, and chest-abdomen scans. Results show that incDG outperforms both conventional iterative solvers and deep learning-based methods, achieving superior accuracy and stability. Moreover, we confirm that training incDG without ground truth does not significantly degrade performance, making it a practical and powerful tool for solving non-convex inverse problems in imaging and beyond.
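incDG itself is not reproduced here, but its core idea, refining a learned initialization with regularized incremental iterations on a variational objective, can be illustrated on a toy 1D denoising problem. This sketch uses a smoothed total-variation penalty (a stand-in for the paper's TpV regularizer) and substitutes the noisy signal itself for the Deep Guess network initialization:

```python
import numpy as np

def tv_smooth_grad(x, eps=1e-2):
    """Gradient of a smoothed total-variation penalty sum_i sqrt(d_i^2 + eps)."""
    d = np.diff(x)
    w = d / np.sqrt(d * d + eps)
    g = np.zeros_like(x)
    g[:-1] -= w  # d_i = x_{i+1} - x_i depends on x_i with weight -1
    g[1:] += w   # ... and on x_{i+1} with weight +1
    return g

def refine(x0, y, lam=0.3, step=0.02, iters=500):
    """Incremental variational refinement: minimize 0.5||x-y||^2 + lam*TV_eps(x)."""
    x = x0.copy()
    for _ in range(iters):
        x -= step * ((x - y) + lam * tv_smooth_grad(x))
    return x

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0], 30)                # piecewise-constant ground truth
y = clean + 0.2 * rng.standard_normal(clean.size)     # noisy measurement
x0 = y.copy()                                         # stand-in for the network init
x = refine(x0, y)
print(f"noisy MAE={np.abs(y - clean).mean():.3f}, refined MAE={np.abs(x - clean).mean():.3f}")
```

In incDG, the initialization comes from a trained network rather than the data, which lets the non-convex solver start close to a good basin and converge in few iterations.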

Development of a deep learning method for phase retrieval image enhancement in phase contrast microcomputed tomography.

Ding XF, Duan X, Li N, Khoz Z, Wu FX, Chen X, Zhu N

PubMed · May 13 2025
Propagation-based imaging (one method of X-ray phase contrast imaging) with microcomputed tomography (PBI-µCT) offers the potential to visualise low-density materials, such as soft tissues and hydrogel constructs, which are difficult to identify with conventional absorption-based contrast µCT. Conventional µCT reconstruction produces edge-enhanced contrast (EEC) images, which preserve sharp boundaries but are susceptible to noise and do not provide consistent grey value representation for the same material. Meanwhile, phase retrieval (PR) algorithms can convert edge-enhanced contrast to area contrast to improve the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR), but usually result in over-smoothing, thus creating inaccuracies in quantitative analysis. To alleviate these problems, this study developed a deep learning-based method called edge view enhanced phase retrieval (EVEPR) by strategically integrating the complementary spatial features of denoised EEC and PR images, and further applied this method to segment hydrogel constructs in vitro and ex vivo. EVEPR used paired denoised EEC and PR images to train a deep convolutional neural network (CNN) on a dataset-to-dataset basis. The CNN was trained on important high-frequency details, such as edges and boundaries from the EEC images, and on area contrast from the PR images. The CNN-predicted result showed enhanced area contrast beyond conventional PR algorithms while improving SNR and CNR. The enhanced CNR, in particular, allowed the images to be segmented with greater efficiency. EVEPR was applied to in vitro and ex vivo PBI-µCT images of low-density hydrogel constructs. The enhanced visibility and consistency of the hydrogel constructs was essential for segmenting such materials, which usually exhibit extremely poor contrast. The EVEPR images allowed for more accurate segmentation with reduced manual adjustments.
The efficiency in segmentation allowed for the generation of a sizeable database of segmented hydrogel scaffolds, which were used in conventional data-driven segmentation applications. EVEPR was demonstrated to be a robust post-processing method capable of significantly enhancing image quality by training a CNN on paired denoised EEC and PR images. This method not only addresses the common issues of over-smoothing and noise susceptibility in conventional PBI-µCT image processing but also enables efficient and accurate in vitro and ex vivo image processing of low-density materials.
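The CNR gain that makes EVEPR images easier to segment can be quantified with a standard definition (pooled-noise denominator; the grey values below are hypothetical, not from the study):

```python
import numpy as np

def cnr(region_a: np.ndarray, region_b: np.ndarray) -> float:
    """Contrast-to-noise ratio between two ROIs, with pooled noise in the denominator."""
    noise = np.sqrt(0.5 * (region_a.var() + region_b.var()))
    return abs(region_a.mean() - region_b.mean()) / noise

rng = np.random.default_rng(2)
hydrogel = rng.normal(1.00, 0.05, 1000)    # hypothetical grey values in the construct
background = rng.normal(0.80, 0.05, 1000)  # surrounding medium
print(f"CNR = {cnr(hydrogel, background):.1f}")
```

A higher CNR means the hydrogel/background intensity distributions overlap less, so a simple threshold separates them more reliably.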

Automatic deep learning segmentation of mandibular periodontal bone topography on cone-beam computed tomography images.

Palkovics D, Molnar B, Pinter C, García-Mato D, Diaz-Pinto A, Windisch P, Ramseier CA

PubMed · May 13 2025
This study evaluated the performance of a multi-stage Segmentation Residual Network (SegResNet)-based deep learning (DL) model for the automatic segmentation of cone-beam computed tomography (CBCT) images of patients with stage III and IV periodontitis. Seventy pre-processed CBCT scans from patients undergoing periodontal rehabilitation were used for training and validation. The model was tested on 10 CBCT scans independent of the training dataset by comparing results with semi-automatic (SA) segmentations. Segmentation accuracy was assessed using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and the 95th percentile Hausdorff distance (HD95). Linear periodontal measurements were performed on four tooth surfaces to assess the validity of the DL segmentation in the periodontal region. The DL model achieved a mean DSC of 0.9650 ± 0.0097, with an IoU of 0.9340 ± 0.0180 and an HD95 of 0.4820 ± 0.1269 mm, showing strong agreement with SA segmentation. Linear measurements revealed high statistical correlations for the mesial, distal, and lingual surfaces, with intraclass correlation coefficients (ICC) of 0.9442 (p<0.0001), 0.9232 (p<0.0001), and 0.9598 (p<0.0001), respectively, while buccal measurements showed lower consistency, with an ICC of 0.7481 (p<0.0001). The DL method reduced segmentation time 47-fold compared to the SA method. The acquired 3D models may enable precise treatment planning in cases where conventional diagnostic modalities are insufficient. However, the robustness of the model must be increased to improve its general reliability and consistency at the buccal aspect of the periodontal region. This study presents a DL model for the CBCT-based segmentation of periodontal defects, demonstrating high accuracy and a 47-fold time reduction compared to SA methods, thus improving the feasibility of 3D diagnostics for advanced periodontitis.
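The HD95 metric reported above can be computed directly from boundary point sets. A minimal numpy sketch (toy point sets with Euclidean distances; production code would use surface voxels extracted from the masks):

```python
import numpy as np

def hd95(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """95th-percentile symmetric Hausdorff distance between point sets of shape (N, 3)."""
    # Pairwise distances via broadcasting: d[i, j] = ||a_i - b_j||
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    d_ab = d.min(axis=1)  # each point in A to its nearest point in B
    d_ba = d.min(axis=0)  # each point in B to its nearest point in A
    return max(np.percentile(d_ab, 95), np.percentile(d_ba, 95))

# Toy surfaces: B is A shifted by 0.5 mm along x
a = np.array([[float(i), 0.0, 0.0] for i in range(20)])
b = a + np.array([0.5, 0.0, 0.0])
print(hd95(a, b))  # → 0.5
```

Taking the 95th percentile instead of the maximum makes the metric robust to a few outlier boundary points, which is why HD95 is preferred over the plain Hausdorff distance in segmentation papers.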

Segmentation of renal vessels on non-enhanced CT images using deep learning models.

Zhong H, Zhao Y, Zhang Y

PubMed · May 13 2025
To evaluate the feasibility of performing renal vessel reconstruction on non-enhanced CT images using deep learning models. CT scans of 177 patients in the non-enhanced, arterial and venous phases were chosen. These data were randomly divided into a training set (n = 120), a validation set (n = 20) and a test set (n = 37). In the training and validation sets, a radiologist marked out the right renal arteries and veins on non-enhanced phase CT images using the contrast phases as references. The trained deep learning models were tested and evaluated on the test set. A radiologist performed renal vessel reconstruction on the test set without the contrast phase reference, and the results were used for comparison. Reconstruction using the arterial and venous phases served as the gold standard. Without the contrast phase reference, both the radiologist and the model could accurately identify the main trunks of the artery and vein. The accuracy was 91.9% vs. 97.3% (model vs. radiologist) for the artery and 91.9% vs. 100% for the vein; the differences were not statistically significant. The model had difficulty identifying accessory arteries, with accuracy significantly lower than the radiologist's (44.4% vs. 77.8%, p = 0.044). The model also had lower accuracy for accessory veins, but the difference was not significant (64.3% vs. 85.7%, p = 0.094). Deep learning models could accurately recognize the main trunks of the right renal artery and vein, with accuracy comparable to that of radiologists. Although the current model still had difficulty recognizing small accessory vessels, further training and model optimization may address these problems.
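Comparisons of small-sample accuracies like those above are often tested with Fisher's exact test. The abstract does not state which test was used or the underlying counts, so the 2x2 table below is purely hypothetical; this stdlib-only sketch shows the mechanics:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sums hypergeometric probabilities of all tables no more likely than the observed one."""
    n = a + b + c + d
    row1, col1 = a + b, a + c
    def p_table(x):  # probability of a table with top-left cell = x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)
    p_obs = p_table(a)
    lo, hi = max(0, row1 + col1 - n), min(row1, col1)
    return sum(p_table(x) for x in range(lo, hi + 1) if p_table(x) <= p_obs + 1e-12)

# Hypothetical counts: model correct on 4/9 accessory arteries, radiologist on 7/9
p = fisher_exact_two_sided(4, 5, 7, 2)
print(round(p, 3))
```

With counts this small, such a difference would not reach significance under Fisher's test, which illustrates why the choice of test and sample size matter when interpreting reported p-values.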

Artificial intelligence for chronic total occlusion percutaneous coronary interventions.

Rempakos A, Pilla P, Alexandrou M, Mutlu D, Strepkos D, Carvalho PEP, Ser OS, Bahbah A, Amin A, Prasad A, Azzalini L, Ybarra LF, Mastrodemos OC, Rangan BV, Al-Ogaili A, Jalli S, Burke MN, Sandoval Y, Brilakis ES

PubMed · May 13 2025
Artificial intelligence (AI) has become pivotal in advancing medical care, particularly in interventional cardiology. Recent AI developments have proven effective in guiding advanced procedures and complex decisions. The authors review the latest AI-based innovations in the diagnosis of chronic total occlusions (CTO) and in determining the probability of success of CTO percutaneous coronary intervention (PCI). Neural networks and deep learning strategies were the most commonly used algorithms, and the models were trained and deployed using a variety of data types, such as clinical parameters and imaging. AI holds great promise in facilitating CTO PCI.

Blockchain enabled collective and combined deep learning framework for COVID19 diagnosis.

Periyasamy S, Kaliyaperumal P, Thirumalaisamy M, Balusamy B, Elumalai T, Meena V, Jadoun VK

PubMed · May 13 2025
The rapid spread of SARS-CoV-2 has highlighted the need for intelligent methodologies in COVID-19 diagnosis. Clinicians face significant challenges due to the virus's fast transmission rate and the lack of reliable diagnostic tools. Although artificial intelligence (AI) has improved image processing, conventional approaches still rely on centralized data storage and training. This reliance increases complexity and raises privacy concerns, which hinder global data exchange. Therefore, it is essential to develop collaborative models that balance accuracy with privacy protection. This research presents a novel framework that combines blockchain technology with a combined learning paradigm to ensure secure data distribution and reduced complexity. The proposed Combined Learning Collective Deep Learning Blockchain Model (CLCD-Block) aggregates data from multiple institutions and leverages a hybrid capsule learning network for accurate predictions. Extensive testing with lung CT images demonstrates that the model outperforms existing models, achieving an accuracy exceeding 97%. Specifically, on four benchmark datasets, CLCD-Block achieved up to 98.79% Precision, 98.84% Recall, 98.79% Specificity, 98.81% F1-Score, and 98.71% Accuracy, showcasing its superior diagnostic capability. Designed for COVID-19 diagnosis, the CLCD-Block framework is adaptable to other applications, integrating AI, decentralized training, privacy protection, and secure blockchain collaboration. It addresses challenges in diagnosing chronic diseases, facilitates cross-institutional research and monitors infectious outbreaks. Future work will focus on enhancing scalability, optimizing real-time performance and adapting the model for broader healthcare datasets.
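The paper's CLCD-Block couples blockchain coordination with collective training across institutions; the blockchain layer is beyond a short sketch, but the aggregation step at its core can be illustrated FedAvg-style (sample-weighted parameter averaging; the institution counts and model shapes below are invented for illustration):

```python
import numpy as np

def federated_average(weights_list, sample_counts):
    """Sample-weighted average of per-institution model parameters (FedAvg-style).
    weights_list: one list of parameter arrays per institution, same shapes across sites."""
    total = sum(sample_counts)
    return [
        sum((n / total) * w[k] for w, n in zip(weights_list, sample_counts))
        for k in range(len(weights_list[0]))
    ]

# Three hypothetical institutions, each holding one weight matrix and one bias vector
rng = np.random.default_rng(3)
institutions = [[rng.standard_normal((4, 2)), rng.standard_normal(2)] for _ in range(3)]
counts = [120, 80, 200]  # local training-set sizes
global_model = federated_average(institutions, counts)
print(global_model[0].shape, global_model[1].shape)
```

Only parameters leave each site, never the CT images themselves, which is the privacy property the decentralized design relies on; the blockchain adds tamper-evident logging of these exchanges.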

Rethinking femoral neck anteversion assessment: a novel automated 3D CT method compared to traditional manual techniques.

Xiao H, Yibulayimu S, Zhao C, Sang Y, Chen Y, Ge Y, Sun Q, Ming Y, Bei M, Zhu G, Song Y, Wang Y, Wu X

PubMed · May 13 2025
To evaluate the accuracy and reliability of a novel automated 3D CT-based method for measuring femoral neck anteversion (FNA) compared to three traditional manual methods. A total of 126 femurs from 63 full-length CT scans (35 men and 28 women; average age: 52.0 ± 14.7 years) were analyzed. The automated method used a deep learning network for femur segmentation, landmark identification, and anteversion calculation, with results generated based on two axes: Auto_GT (using the greater trochanter-to-intercondylar notch center axis) and Auto_P (using the piriformis fossa-to-intercondylar notch center axis). These results were validated through manual landmark annotation. The same dataset was assessed using three conventional manual methods: Murphy, Reikeras, and Lee methods. Intra- and inter-observer reliability were assessed using intraclass correlation coefficients (ICCs), and pairwise comparisons analyzed correlations and differences between methods. The automated methods produced consistent FNA measurements (Auto_GT: 17.59 ± 9.16° vs. Auto_P: 17.37 ± 9.17° on the right; 15.08 ± 9.88° vs. 14.84 ± 9.90° on the left). Intra-observer ICCs ranged from 0.864 to 0.961, and inter-observer ICCs between Auto_GT and the manual methods were high, except for the Lee method. No significant differences were observed between the two automated methods or between the automated and manual verification methods. Moreover, strong correlations (R > 0.9, p < 0.001) were found between Auto_GT and the manual methods. The novel automated 3D CT-based method demonstrates strong reproducibility and reliability for measuring femoral neck anteversion, with performance comparable to traditional manual techniques. These results indicate its potential utility for preoperative planning, postoperative evaluation, and computer-assisted orthopedic procedures. Not applicable.
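Whatever landmarks define the axes (greater trochanter or piriformis fossa to the intercondylar notch center), FNA ultimately reduces to the angle between the neck axis and the condylar axis after projection onto the axial plane. A hypothetical geometric sketch of that final step (the landmark detection itself is the hard, learned part and is not shown):

```python
import numpy as np

def anteversion_angle(neck_axis, condylar_axis, axial_normal=(0.0, 0.0, 1.0)):
    """Angle in degrees between two axes after projecting both onto the axial plane."""
    n = np.asarray(axial_normal, dtype=float)
    n /= np.linalg.norm(n)
    def project(v):
        v = np.asarray(v, dtype=float)
        p = v - v.dot(n) * n      # remove the out-of-plane component
        return p / np.linalg.norm(p)
    a, b = project(neck_axis), project(condylar_axis)
    return np.degrees(np.arccos(np.clip(a.dot(b), -1.0, 1.0)))

# Toy axes: neck axis tilted ~17 degrees anteriorly relative to the condylar axis
neck = (np.cos(np.radians(17.0)), np.sin(np.radians(17.0)), 0.3)
condyles = (1.0, 0.0, 0.0)
print(round(anteversion_angle(neck, condyles), 1))  # → 17.0
```

The Auto_GT/Auto_P difference in the study comes entirely from which proximal landmark anchors the neck axis; the projection-and-angle step is shared.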

Development and validation of an early diagnosis model for severe mycoplasma pneumonia in children based on interpretable machine learning.

Xie S, Wu M, Shang Y, Tuo W, Wang J, Cai Q, Yuan C, Yao C, Xiang Y

PubMed · May 13 2025
Pneumonia is a major threat to the health of children, especially those under the age of five. Mycoplasma pneumoniae infection is a core cause of pediatric pneumonia, and the incidence of severe mycoplasma pneumoniae pneumonia (SMPP) has increased in recent years. Therefore, there is an urgent need to establish an early warning model for SMPP to improve the prognosis of pediatric pneumonia. The study comprised 597 SMPP patients aged between 1 month and 18 years. Clinical data were selected through Lasso regression analysis, followed by the application of eight machine learning algorithms to develop early warning models. The accuracy of the models was assessed using validation and prospective cohorts. To facilitate clinical assessment, the study simplified the indicators and constructed a visualized simplified model. The clinical applicability of the model was evaluated by decision curve analysis (DCA) and clinical impact curve (CIC). After variable selection, eight machine learning models were developed using age, sex and 21 serum indicators identified as predictive factors for SMPP. A Light Gradient Boosting Machine (LightGBM) model demonstrated strong performance, achieving an AUC of 0.92 in prospective validation. SHAP analysis was utilized to screen advantageous variables, comprising serum S100A8/A9, tracheal computed tomography (CT), retinol-binding protein (RBP), platelet larger cell ratio (P-LCR) and CD4+CD25+Treg cell counts, for constructing a simplified model (SCRPT) to improve clinical applicability. The SCRPT diagnostic model exhibited favorable diagnostic efficacy (AUC > 0.8). Additionally, the study found that S100A8/A9 outperformed clinical inflammatory markers in differentiating the severity of MPP. The SCRPT model, consisting of five dominant variables (S100A8/A9, CT, RBP, P-LCR and Treg cell counts) screened from the eight machine learning models, is expected to be a tool for early diagnosis of SMPP.
S100A8/A9 can also be used as a biomarker for differentiating SMPP severity when medical resources are limited.
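The Lasso-style variable selection that precedes the model building above can be sketched without any ML framework: an L1-penalized logistic regression fitted by proximal gradient descent (ISTA) drives uninformative coefficients to exactly zero. All data below are synthetic, and this is a generic illustration of the technique, not the study's pipeline:

```python
import numpy as np

def l1_logistic_selection(X, y, lam=0.1, step=0.1, iters=2000):
    """L1-penalized logistic regression via proximal gradient (ISTA).
    Nonzero coefficients indicate selected features, Lasso-style."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))       # predicted probabilities
        grad = X.T @ (p - y) / n                  # gradient of mean logistic loss
        w -= step * grad
        # Soft-thresholding: the proximal operator of the L1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

rng = np.random.default_rng(4)
n, d = 400, 10
X = rng.standard_normal((n, d))
logits = 2.0 * X[:, 0] - 1.5 * X[:, 3]            # only features 0 and 3 carry signal
y = (logits + 0.5 * rng.standard_normal(n) > 0).astype(float)
w = l1_logistic_selection(X, y)
print(np.flatnonzero(np.abs(w) > 0.05))           # indices of the selected features
```

In the study, the analogous step reduced candidate indicators to the five SCRPT variables; the selected features then feed a downstream classifier such as LightGBM.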
