Page 60 of 81808 results

Current AI technologies in cancer diagnostics and treatment.

Tiwari A, Mishra S, Kuo TR

PubMed · Jun 2, 2025
Cancer continues to be a significant international health issue, which demands the invention of new methods for early detection, precise diagnoses, and personalized treatments. Artificial intelligence (AI) has rapidly become a groundbreaking component in the modern era of oncology, offering sophisticated tools across the range of cancer care. In this review, we performed a systematic survey of the current status of AI technologies used for cancer diagnoses and therapeutic approaches. We discuss AI-facilitated imaging diagnostics using a range of modalities such as computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, and digital pathology, highlighting the growing role of deep learning in detecting early-stage cancers. We also explore applications of AI in genomics and biomarker discovery, liquid biopsies, and non-invasive diagnoses. In therapeutic interventions, AI-based clinical decision support systems, individualized treatment planning, and AI-facilitated drug discovery are transforming precision cancer therapies. The review also evaluates the effects of AI on radiation therapy, robotic surgery, and patient management, including survival predictions, remote monitoring, and AI-facilitated clinical trials. Finally, we discuss important challenges such as data privacy, interpretability, and regulatory issues, and recommend future directions that involve the use of federated learning, synthetic biology, and quantum-boosted AI. This review highlights the groundbreaking potential of AI to revolutionize cancer care by making diagnostics, treatments, and patient management more precise, efficient, and personalized.

Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine

Wenjun Xia, Chuang Niu, Ge Wang

arXiv preprint · Jun 2, 2025
Computed tomography (CT) is a major medical imaging modality. Clinical CT scenarios, such as low-dose screening, sparse-view scanning, and metal implants, often lead to severe noise and artifacts in reconstructed images, requiring improved reconstruction techniques. The introduction of deep learning has significantly advanced CT image reconstruction. However, obtaining paired training data remains rather challenging due to patient motion and other constraints. Although deep learning methods can still perform well with approximately paired data, they inherently carry the risk of hallucination due to data inconsistencies and model instability. In this paper, we integrate data fidelity into a state-of-the-art generative AI model, the Poisson flow generative model (PFGM) and its generalized version PFGM++, and propose a novel CT framework: Flow-Oriented Reconstruction Conditioning Engine (FORCE). In our experiments, the proposed method shows superior performance in various CT imaging tasks, outperforming existing unsupervised reconstruction approaches.
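The data-fidelity component the abstract describes can be pictured as a gradient step toward measurement consistency. A minimal numpy sketch, with a random matrix standing in for the CT forward (projection) operator; this is an illustration, not the FORCE implementation:

```python
import numpy as np

# Toy data-fidelity update: the kind of step a generative reconstruction
# framework can interleave with its prior updates. A is a random
# stand-in for the CT projection operator (an assumption for brevity).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))     # toy forward operator
x_true = rng.standard_normal(10)      # ground-truth image (flattened)
y = A @ x_true                        # noiseless measurements

x = np.zeros(10)                      # e.g. an initial generative sample
step = 0.01                           # small enough for stability here
for _ in range(10000):
    # gradient of 0.5 * ||A x - y||^2 pulls x toward data consistency
    x -= step * A.T @ (A @ x - y)

print(np.linalg.norm(A @ x - y))      # residual shrinks toward zero
```

In a PFGM-style framework, steps like this would alternate with the generative model's sampling updates, so the prior and the measurements jointly shape the reconstruction.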

Robust multi-coil MRI reconstruction via self-supervised denoising.

Aali A, Arvinte M, Kumar S, Arefeen YI, Tamir JI

PubMed · Jun 2, 2025
To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising. We evaluate two DL-based reconstruction methods: Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL). We evaluate the impact of denoising on the performance of these DL-based methods in solving accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE), higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
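The NRMSE and PSNR figures the abstract reports follow standard definitions. A minimal numpy sketch using one common normalization convention (not necessarily the authors' exact one):

```python
import numpy as np

# Conventional image-quality metrics; normalization choices vary by
# paper, so treat these as one common definition.
def nrmse(ref, est):
    return np.linalg.norm(est - ref) / np.linalg.norm(ref)

def psnr(ref, est, data_range=None):
    if data_range is None:
        data_range = ref.max() - ref.min()
    mse = np.mean((est - ref) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

ref = np.linspace(0, 1, 64).reshape(8, 8)       # toy "clean" image
noisy = ref + 0.05 * np.random.default_rng(1).standard_normal((8, 8))
print(nrmse(ref, noisy), psnr(ref, noisy))
```

Lower NRMSE and higher PSNR against a reference scan are the directions of improvement the study reports for training on denoised inputs.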

Artificial Intelligence-Driven Innovations in Diabetes Care and Monitoring

Abdul Rahman, S., Mahadi, M., Yuliana, D., Budi Susilo, Y. K., Ariffin, A. E., Amgain, K.

medRxiv preprint · Jun 2, 2025
This study explores Artificial Intelligence (AI)'s transformative role in diabetes care and monitoring, focusing on innovations that optimize patient outcomes. AI, particularly machine learning and deep learning, significantly enhances early detection of complications like diabetic retinopathy and improves screening efficacy. The methodology employs a bibliometric analysis using Scopus, VOSviewer, and Publish or Perish, analyzing 235 articles from 2023-2025. Results indicate a strong interdisciplinary focus, with Computer Science and Medicine being the dominant subject areas (36.9% and 12.9%, respectively). Bibliographic coupling reveals robust international collaborations led by the U.S. (1558.52 link strength), the UK, and China, with key influential documents by Zhu (2023c) and Annuzzi (2023). This research highlights AI's impact on enhancing monitoring, personalized treatment, and proactive care, while acknowledging challenges in data privacy and ethical deployment. Future work should bridge technological advancements with real-world implementation to create equitable and efficient diabetes care systems.

Estimating patient-specific organ doses from head and abdominal CT scans via machine learning with optimized regulation strength and feature quantity.

Shao W, Qu L, Lin X, Yun W, Huang Y, Zhuo W, Liu H

PubMed · Jun 1, 2025
This study aims to estimate patient-specific organ doses from CT scans via radiomics feature-based SVR models with optimized training parameters, maximizing the models' predictive accuracy and robustness by fine-tuning the regularization parameter and the number of input features. CT images from head and abdominal scans were processed using DeepViewer®, an auto-segmentation tool, to define regions of interest (ROIs) for the organs. Radiomics features were extracted from the CT data and ROIs. Benchmark organ doses were then calculated through Monte Carlo (MC) simulations. SVR models, trained on these radiomics features, were used to predict patient-specific organ doses from CT scans. The trained SVR models were optimized by adjusting the input feature quantity and the regularization parameter, yielding configurations for accurate patient-specific organ dose predictions. Regularization parameter values of C = 5 and C = 10 brought the SVR models to a saturation state for the head and abdominal organs. The models' MAPE and R<sup>2</sup> depend strongly on organ type. For head organs, the appropriate parameters are C = 5 or 10 with input feature quantities of 50 for the brain and 200 for the left eye, right eye, left lens, and right lens. For abdominal organs, the appropriate parameters are C = 5 or 10 with input feature quantities of 80 for the bowel, 50 for the left and right kidneys, and 100 for the liver. Selecting appropriate combinations of input feature quantity and regularization parameter maximizes the predictive accuracy and robustness of radiomics feature-based SVR models for patient-specific organ dose prediction from CT scans.
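The "input feature quantity" tuning can be pictured with a toy selection step: rank radiomics features by absolute correlation with the benchmark dose and keep the top k. The criterion and data here are hypothetical, chosen only to make the idea concrete; the authors' actual pipeline may select features differently:

```python
import numpy as np

# Hypothetical feature-quantity selection: keep the k features most
# correlated (in absolute value) with the benchmark organ dose.
rng = np.random.default_rng(0)
n_patients, n_features = 40, 300
X = rng.standard_normal((n_patients, n_features))          # toy radiomics features
dose = X[:, 0] * 2.0 + X[:, 1] - 0.5 * X[:, 2] \
       + 0.1 * rng.standard_normal(n_patients)             # toy benchmark dose

corr = np.array([abs(np.corrcoef(X[:, j], dose)[0, 1])
                 for j in range(n_features)])
k = 50                                     # e.g. 50 features, as for the brain model
top_k = np.argsort(corr)[::-1][:k]         # indices of the k most correlated features
X_selected = X[:, top_k]
print(top_k[:3], X_selected.shape)
```

An SVR would then be trained on `X_selected`, sweeping k and the regularization parameter C to find the saturation point the abstract describes.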

Information Geometric Approaches for Patient-Specific Test-Time Adaptation of Deep Learning Models for Semantic Segmentation.

Ravishankar H, Paluru N, Sudhakar P, Yalavarthy PK

PubMed · Jun 1, 2025
The test-time adaptation (TTA) of deep-learning-based semantic segmentation models, specific to individual patient data, was addressed in this study. Existing TTA methods in medical imaging are often unconstrained, require anatomical prior information, or need additional neural networks built during the training phase, making them less practical and prone to performance deterioration. In this study, a novel framework based on information geometric principles was proposed to achieve generic, off-the-shelf, regularized patient-specific adaptation of models at test time. By considering the pre-trained model and the adapted models as part of statistical neuromanifolds, test-time adaptation was treated as constrained functional regularization using information geometric measures, leading to improved generalization and patient optimality. The efficacy of the proposed approach was shown on three challenging problems: 1) improving generalization of state-of-the-art models for segmenting COVID-19 anomalies in Computed Tomography (CT) images; 2) cross-institutional brain tumor segmentation from magnetic resonance (MR) images; and 3) segmentation of retinal layers in Optical Coherence Tomography (OCT) images. Further, it was demonstrated that robust patient-specific adaptation can be achieved without adding significant computational burden, making it the first such approach based on information geometric principles.
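A toy sketch in the spirit of the abstract: treat test-time adaptation as regularized optimization, reducing predictive entropy while a KL divergence (one information-geometric measure) keeps the adapted predictions close to the pre-trained ones. Purely illustrative; the paper's actual objective, measures, and models differ:

```python
import numpy as np

# Toy regularized TTA on raw class logits: minimize H(p) + lam * KL(p || p0),
# where p0 is the frozen pre-trained prediction. The KL term plays the
# role of the constraint tying the adapted model to the pre-trained one.
def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
z = rng.standard_normal((100, 3))          # logits for 100 "pixels", 3 classes
p0 = softmax(z)                            # pre-trained predictions (frozen)
lam, lr = 0.5, 0.1                         # KL weight and step size

for _ in range(300):
    p = softmax(z)
    logp = np.log(p)
    H = -(p * logp).sum(-1, keepdims=True)                  # per-pixel entropy
    KL = (p * (logp - np.log(p0))).sum(-1, keepdims=True)   # per-pixel KL(p || p0)
    g_H = -p * (logp + H)                                   # dH/dz via softmax chain rule
    g_KL = p * ((logp - np.log(p0)) - KL)                   # dKL/dz
    z -= lr * (g_H + lam * g_KL)

p = softmax(z)
H0 = -(p0 * np.log(p0)).sum(-1).mean()
H1 = -(p * np.log(p)).sum(-1).mean()
print(H0, H1)    # adapted predictions are more confident, but anchored to p0
```

The KL weight `lam` is what makes the adaptation "constrained": with `lam = 0` the predictions would collapse toward one-hot regardless of the pre-trained model.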

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

PubMed · Jun 1, 2025
In Computed Tomography (CT) imaging, the ring artifacts caused by inconsistent detector response can significantly degrade reconstructed images, negatively impacting subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes a global dependency-enhanced dual-domain parallel neural network for Ring Artifact Removal (RAR). First, based on the fact that the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted so that the network can extract and exploit latent features from both domains to improve ring artifact removal. Moreover, ring artifacts are globally correlated in both Cartesian and polar coordinate systems, but convolutional neural networks have inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the novel Mamba mechanism to achieve a global receptive field without incurring high computational complexity, enabling effective capture of long-range dependency and thereby enhancing image restoration and artifact reduction. Experiments on simulated data validate the effectiveness of the dual-domain parallel network and the Mamba mechanism, and results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
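The dual-domain idea rests on a simple geometric fact: a ring (constant radius) in Cartesian coordinates becomes a straight vertical stripe in polar coordinates, where it is much easier to model. A small numpy sketch using nearest-neighbor resampling (illustrative only, not the paper's network):

```python
import numpy as np

# Build a synthetic "ring artifact" and resample it into polar
# coordinates, where the ring collapses to a narrow band of rows.
N = 128
y, x = np.mgrid[0:N, 0:N]
r = np.hypot(x - N / 2, y - N / 2)
img = np.zeros((N, N))
img[np.abs(r - 30) < 1.5] = 1.0          # ring artifact at radius 30

n_r, n_theta = 60, 180
radii = np.arange(n_r)
thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
rr, tt = np.meshgrid(radii, thetas, indexing="ij")
xs = np.clip(np.round(N / 2 + rr * np.cos(tt)).astype(int), 0, N - 1)
ys = np.clip(np.round(N / 2 + rr * np.sin(tt)).astype(int), 0, N - 1)
polar = img[ys, xs]                      # shape (n_r, n_theta)

# the ring now lives in a few rows around r = 30; elsewhere it is empty
print(polar[30].mean(), polar[50:53].max())
```

A dual-domain network can exploit both views: the Cartesian branch sees local image texture, while the polar branch sees the artifact as a globally coherent stripe.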

Scale-Aware Super-Resolution Network With Dual Affinity Learning for Lesion Segmentation From Medical Images.

Luo L, Li Y, Chai Z, Lin H, Heng PA, Chen H

PubMed · Jun 1, 2025
Convolutional neural networks (CNNs) have shown remarkable progress in medical image segmentation. However, lesion segmentation remains a challenge for state-of-the-art CNN-based algorithms due to variance in scale and shape. On the one hand, tiny lesions are hard to delineate precisely from medical images, which are often of low resolution. On the other hand, segmenting large lesions requires large receptive fields, which exacerbates the first challenge. In this article, we present a scale-aware super-resolution (SR) network to adaptively segment lesions of various sizes from low-resolution (LR) medical images. Our proposed network contains dual branches to simultaneously conduct lesion mask SR (LMSR) and lesion image SR (LISR). Meanwhile, we introduce scale-aware dilated convolution (SDC) blocks into the multitask decoders to adaptively adjust the receptive fields of the convolutional kernels according to the lesion sizes. To guide the segmentation branch to learn from richer high-resolution (HR) features, we propose a feature affinity (FA) module and a scale affinity (SA) module to enhance the multitask learning of the dual branches. On multiple challenging lesion segmentation datasets, our proposed network achieved consistent improvements compared with other state-of-the-art methods. Code will be available at: https://github.com/poiuohke/SASR_Net.
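Dilation, the mechanism behind the SDC blocks, is easy to see in one dimension: a kernel of size k with dilation d covers a span of (k-1)*d + 1 input samples, so larger dilations enlarge the receptive field without adding parameters. A toy numpy implementation (of dilation itself, not the paper's scale-aware selection rule):

```python
import numpy as np

# 1-D dilated convolution (valid padding): the kernel taps are spaced
# `dilation` samples apart, widening the receptive field.
def dilated_conv1d(x, w, dilation):
    k = len(w)
    span = (k - 1) * dilation + 1        # receptive field of the kernel
    out = np.zeros(len(x) - span + 1)
    for i in range(len(out)):
        out[i] = sum(w[j] * x[i + j * dilation] for j in range(k))
    return out

x = np.zeros(21)
x[10] = 1.0                              # unit impulse
w = np.array([1.0, 1.0, 1.0])
for d in (1, 2, 4):
    resp = dilated_conv1d(x, w, d)
    pos = np.nonzero(resp)[0]
    print(d, pos)                        # 3 taps, spread over a span of 2*d
```

A scale-aware block would choose (or mix) dilations per location so that small lesions get tight kernels and large lesions get wide ones.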

Dental practitioners versus artificial intelligence software in assessing alveolar bone loss using intraoral radiographs.

Almarghlani A, Fakhri J, Almarhoon A, Ghonaim G, Abed H, Sharka R

PubMed · Jun 1, 2025
Integrating artificial intelligence (AI) in the dental field can potentially enhance the efficiency of dental care. However, few studies have investigated whether AI software can achieve results comparable to those obtained by dental practitioners (general practitioners (GPs) and specialists) when assessing alveolar bone loss in a clinical setting. Thus, this study compared the performance of AI in assessing periodontal bone loss with those of GPs and specialists. This comparative cross-sectional study evaluated the performance of dental practitioners and AI software in assessing alveolar bone loss. Radiographs were randomly selected to ensure representative samples. Dental practitioners independently evaluated the radiographs, and the AI software "Second Opinion Software" was tested using the same set of radiographs evaluated by the dental practitioners. The results produced by the AI software were then compared with the baseline values to measure their accuracy and allow direct comparison with the performance of human specialists. The survey received 149 responses; each respondent answered 10 questions comparing the measurements made by the AI and by dental practitioners when assessing the amount of bone loss radiographically. The mean estimates of the participants had a moderate positive correlation with the radiographic measurements (rho = 0.547, <i>p</i> < 0.001) and a weaker but still significant correlation with the AI measurements (rho = 0.365, <i>p</i> < 0.001). The AI measurements had a stronger positive correlation with the radiographic measurements (rho = 0.712, <i>p</i> < 0.001) than with the estimates of dental practitioners. This study highlights the capacity of AI software to enhance the accuracy and efficiency of radiograph-based evaluations of alveolar bone loss. Dental practitioners remain vital for their clinical experience, but AI technology provides a consistent and replicable methodology. Future collaborations between AI experts, researchers, and practitioners could potentially optimize patient care.
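The rho values above are Spearman rank correlations. A compact tie-free implementation, run on made-up readings (the numbers are hypothetical, not the study's data):

```python
import numpy as np

# Spearman's rho for tie-free data: Pearson correlation of the ranks.
def spearman_rho(a, b):
    ra = np.argsort(np.argsort(a)).astype(float)   # ranks (ties not handled)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra * ra).sum() * (rb * rb).sum()))

bone_loss_mm = np.array([1.0, 2.5, 3.1, 4.0, 5.2, 6.8])   # hypothetical baseline
ai_estimate  = np.array([1.2, 2.0, 3.9, 3.5, 5.5, 6.1])   # hypothetical AI output
print(spearman_rho(bone_loss_mm, ai_estimate))
```

Because rho depends only on ranks, it rewards getting the ordering of bone-loss severity right even when the absolute millimeter values disagree, which suits a reader-vs-AI comparison like this one.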

Ultrasound measurement of relative tongue size and its correlation with tongue mobility for healthy individuals.

Sun J, Kitamura T, Nota Y, Yamane N, Hayashi R

PubMed · Jun 1, 2025
The size of an individual's tongue relative to the oral cavity is associated with articulation speed [Feng, Lu, Zheng, Chi, and Honda, in Proceedings of the 10th Biennial Asia Pacific Conference on Speech, Language, and Hearing (2017), pp. 17-19] and may affect speech clarity. This study introduces an ultrasound-based method for measuring relative tongue size, termed ultrasound-based relative tongue size (uRTS), as a cost-effective alternative to the magnetic resonance imaging (MRI)-based method. Using deep learning to extract the tongue contour, uRTS was calculated from tongue and oropharyngeal cavity sizes in the midsagittal plane. Results from ten speakers showed a strong correlation between uRTS and MRI-based measurements (r = 0.87) and a negative correlation with tongue movement speed (r = -0.73), indicating uRTS is a useful index for assessing tongue size.
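Since uRTS is computed from tongue and oropharyngeal cavity sizes in the midsagittal plane, a toy version can be written as an area ratio between two binary masks. The mask geometry and the exact normalization here are placeholders, not the paper's measurement protocol:

```python
import numpy as np

# Toy uRTS: ratio of tongue area to oropharyngeal cavity area in a
# midsagittal slice, using hypothetical rectangular masks.
cavity = np.zeros((64, 64), dtype=bool)
cavity[8:56, 8:56] = True                # hypothetical oropharyngeal region
tongue = np.zeros((64, 64), dtype=bool)
tongue[20:48, 16:48] = True              # hypothetical filled tongue contour

urts = tongue.sum() / cavity.sum()
print(round(float(urts), 3))
```

In the actual method the tongue contour comes from a deep-learning segmentation of the ultrasound image, and the cavity size from anatomical landmarks; only the ratio idea is shown here.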
