Page 10 of 71706 results

The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

pubmed · Aug 5 2025
The REgistry of Flow and Perfusion Imaging for Artificial Intelligence with PET (REFINE PET) was established to collect multicenter PET and associated computed tomography (CT) images, together with clinical data and outcomes, into a comprehensive research resource. REFINE PET will enable validation and development of both standard and novel cardiac PET/CT processing methods. REFINE PET is a multicenter, international registry that contains both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), which include death, myocardial infarction, unstable angina, and late revascularization (>90 days from PET). The REFINE PET registry currently contains data for 35,588 patients from 14 sites, with additional patient data and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2200 imaging variables across 42 categories. The registry is poised to address a broad range of clinical questions, supported by correlative invasive angiography (within 6 months of myocardial perfusion imaging [MPI]) in 5972 patients and a total of 9252 MACE during a median follow-up of 4.2 years. The REFINE PET registry leverages the integration of clinical, multimodality imaging, and novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.

Incorporating Artificial Intelligence into Fracture Risk Assessment: Using Clinical Imaging to Predict the Unpredictable.

Kong SH

pubmed · Aug 4 2025
Artificial intelligence (AI) is increasingly being explored as a complementary tool to traditional fracture risk assessment methods. Conventional approaches, such as bone mineral density measurement and established clinical risk calculators, provide population-level stratification but often fail to capture the structural nuances of bone fragility. Recent advances in AI, particularly deep learning techniques applied to imaging, enable opportunistic screening and individualized risk estimation using routinely acquired radiographs and computed tomography (CT) data. These models demonstrate improved discrimination for osteoporotic fracture detection and risk prediction, supporting applications such as time-to-event modeling and short-term prognosis. CT- and radiograph-based models have shown superiority over conventional metrics in diverse cohorts, while innovations like multitask learning and survival plots contribute to enhanced interpretability and patient-centered communication. Nevertheless, challenges related to model generalizability, data bias, and automation bias persist. Successful clinical integration will require rigorous external validation, transparent reporting, and seamless embedding into electronic medical systems. This review summarizes recent advances in AI-driven fracture assessment, critically evaluates their clinical promise, and outlines a roadmap for translation into real-world practice.

Diagnostic Performance of Imaging-Based Artificial Intelligence Models for Preoperative Detection of Cervical Lymph Node Metastasis in Clinically Node-Negative Papillary Thyroid Carcinoma: A Systematic Review and Meta-Analysis.

Li B, Cheng G, Mo Y, Dai J, Cheng S, Gong S, Li H, Liu Y

pubmed · Aug 4 2025
This systematic review and meta-analysis evaluated the performance of imaging-based artificial intelligence (AI) models in diagnosing preoperative cervical lymph node metastasis (LNM) in clinically node-negative (cN0) papillary thyroid carcinoma (PTC). We conducted a literature search in PubMed, Embase, and Web of Science until February 25, 2025. Studies were selected that focused on imaging-based AI models for predicting cervical LNM in cN0 PTC. The diagnostic performance metrics were analyzed using a bivariate random-effects model, and study quality was assessed with the QUADAS-2 tool. From 671 articles, 11 studies involving 3366 patients were included. Ultrasound (US)-based AI models showed pooled sensitivity of 0.79 and specificity of 0.82, significantly higher than radiologists (p < 0.001). CT-based AI models demonstrated sensitivity of 0.78 and specificity of 0.89. Imaging-based AI models, particularly US-based AI, show promising diagnostic performance. There is a need for further multicenter prospective studies for validation. PROSPERO registration: CRD420251063416.

Natural language processing evaluation of trends in cervical cancer incidence in radiology reports: A ten-year survey.

López-Úbeda P, Martín-Noguerol T, Luna A

pubmed · Aug 4 2025
Cervical cancer, commonly associated with human papillomavirus (HPV) infection, remains the fourth most common cancer in women globally. This study aims to develop and evaluate a Natural Language Processing (NLP) system to identify and analyze cervical cancer incidence trends from 2013 to 2023 at our institution, focusing on age-specific variations and evaluating the possible impact of HPV vaccination. In this retrospective cohort study, we analyzed unstructured radiology reports collected between 2013 and 2023, comprising 433,207 studies involving 250,181 women who underwent CT, MRI, or ultrasound scans of the abdominopelvic region. A rule-based NLP system was developed to extract references to cervical cancer from these reports and validated against a set of 200 manually annotated cases reviewed by an experienced radiologist. The NLP system demonstrated excellent performance, achieving an accuracy of over 99.5%. This high reliability enabled its application in a large-scale population study. Results show that women under 30 maintain a consistently low cervical cancer incidence, likely reflecting early HPV vaccination impact. Incidence in the 30-40 cohort declined until 2020 and then rose slightly, while the 40-60 group exhibited an overall downward trend with fluctuations, suggesting long-term vaccine effects. Incidence in patients over 60 also declined, though with greater variability, possibly due to other risk factors. The developed NLP system effectively identified cervical cancer cases from unstructured radiology reports, facilitating an accurate analysis of the impact of HPV vaccination on cervical cancer prevalence and imaging study requirements. This approach demonstrates the potential of AI and NLP tools in enhancing data accuracy and efficiency in medical epidemiology research.
NLP-based approaches can significantly improve the collection and analysis of epidemiological data on cervical cancer, supporting the development of more targeted and personalized prevention strategies, particularly in populations with heterogeneous HPV vaccination coverage.
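The abstract describes a rule-based NLP system but does not publish its rules. A minimal sketch of how such extraction typically works, with a hypothetical pattern list and a crude negation check (both are illustrative assumptions, not the authors' actual rules):

```python
import re

# Hypothetical patterns; the study's real rule set is not given in the abstract.
POSITIVE = re.compile(
    r"\b(cervical (carcinoma|cancer|neoplasm)|carcinoma of the (uterine )?cervix)\b",
    re.IGNORECASE,
)
# Negation cue followed, within the same sentence, by a cervix-related term.
NEGATION = re.compile(
    r"\b(no evidence of|without|negative for|ruled out)\b[^.]*?(cervical|cervix)",
    re.IGNORECASE,
)

def flags_cervical_cancer(report: str) -> bool:
    """Return True if the report mentions cervical cancer affirmatively."""
    if NEGATION.search(report):
        return False
    return bool(POSITIVE.search(report))
```

In practice such systems layer many more cues (hedging, historical mentions, section headers) before being validated against annotated reports, as the study did with 200 radiologist-reviewed cases.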

Artificial intelligence: a new era in prostate cancer diagnosis and treatment.

Vidiyala N, Parupathi P, Sunkishala P, Sree C, Gujja A, Kanagala P, Meduri SK, Nyavanandi D

pubmed · Aug 4 2025
Prostate cancer (PCa) represents one of the most prevalent cancers among men, with substantial challenges in timely and accurate diagnosis and subsequent treatment. Traditional diagnosis and treatment methods for PCa, such as prostate-specific antigen (PSA) biomarker detection, digital rectal examination, imaging (CT/MRI) analysis, and biopsy histopathological examination, suffer from limitations such as a lack of specificity, generation of false positives or negatives, and difficulty handling large data volumes, leading to overdiagnosis and overtreatment. The integration of artificial intelligence (AI) in PCa diagnosis and treatment is revolutionizing traditional approaches by offering advanced tools for early detection, personalized treatment planning, and patient management. AI technologies, especially machine learning and deep learning, improve diagnostic accuracy and treatment planning. AI algorithms analyze imaging data, such as MRI and ultrasound, to identify cancerous lesions with high precision. In addition, AI algorithms enhance risk assessment and prognosis by combining clinical, genomic, and imaging data. This leads to more tailored treatment strategies, enabling informed decisions about active surveillance, surgery, or new therapies, thereby improving quality of life while reducing unnecessary diagnoses and treatments. This review examines current AI applications in PCa care, focusing on their transformative impact on diagnosis and treatment planning while recognizing potential challenges. It also outlines expected improvements in diagnosis through AI-integrated systems and decision support tools for healthcare teams. The findings highlight AI's potential to enhance clinical outcomes, operational efficiency, and patient-centered care in managing PCa.

Enhanced detection of ovarian cancer using AI-optimized 3D CNNs for PET/CT scan analysis.

Sadeghi MH, Sina S, Faghihi R, Alavi M, Giammarile F, Omidi H

pubmed · Aug 4 2025
This study investigates how deep learning (DL) can enhance ovarian cancer diagnosis and staging using large imaging datasets. Specifically, we compare six conventional convolutional neural network (CNN) architectures (ResNet, DenseNet, GoogLeNet, U-Net, VGG, and AlexNet) with OCDA-Net, an enhanced model designed for [<sup>18</sup>F]FDG PET image analysis. OCDA-Net, built on the ResNet architecture, was compared against the other models using randomly split datasets of training (80%), validation (10%), and test (10%) images. Trained over 100 epochs, OCDA-Net achieved superior diagnostic classification with an accuracy of 92%, and staging results of 94%, supported by robust precision, recall, and F-measure metrics. Grad-CAM++ heat maps confirmed that the network attends to hyper-metabolic lesions, supporting clinical interpretability. Our findings show that OCDA-Net outperforms existing CNN models and has strong potential to transform ovarian cancer diagnosis and staging. The study suggests that implementing these DL models in clinical practice could ultimately improve patient prognoses. Future research should expand datasets, enhance model interpretability, and validate these models in clinical settings.

Early prediction of proton therapy dose distributions and DVHs for hepatocellular carcinoma using contour-based CNN models from diagnostic CT and MRI.

Rachi T, Tochinai T

pubmed · Aug 4 2025
Proton therapy is commonly used for treating hepatocellular carcinoma (HCC); however, its feasibility can be challenging to assess for large tumors or tumors adjacent to critical organs at risk (OARs), and feasibility is typically assessed only after planning computed tomography (CT) acquisition. This study aimed to predict proton dose distributions using diagnostic CT (dCT) and diagnostic MRI (dMRI) with a convolutional neural network (CNN), enabling early treatment feasibility assessments. Dose distributions and dose-volume histograms (DVHs) were calculated for 118 patients with HCC using intensity-modulated proton therapy (IMPT) and passive proton therapy. A CPU-based CNN model was used to predict DVHs and 3D dose distributions from diagnostic images. Prediction accuracy was evaluated using mean absolute error (MAE), mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and gamma passing rate with a 3 mm/3% criterion. The predicted DVHs and dose distributions showed high agreement with actual values. MAE remained below 3.0%, with passive techniques achieving 1.2-1.8%. MSE was below 0.004 in all cases. PSNR ranged from 24 to 28 dB, and SSIM exceeded 0.94 in most conditions. Gamma passing rates averaged 82-83% for IMPT and 92-93% for passive techniques. The model achieved comparable accuracy when using dMRI and dCT. This study demonstrates that early dose distribution prediction from diagnostic imaging is feasible and accurate using a lightweight CNN model. Despite anatomical variability between diagnostic and planning images, this approach provides timely insights into treatment feasibility, potentially supporting insurance pre-authorization, reducing unnecessary imaging, and optimizing clinical workflows for HCC proton therapy.
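The 3 mm/3% gamma passing rate used above to compare predicted and planned doses can be illustrated with a simplified 1D implementation (global dose normalization, brute-force search over all points, no low-dose cutoff or interpolation; clinical gamma tools are considerably more elaborate):

```python
import numpy as np

def gamma_pass_rate_1d(dose_ref, dose_eval, spacing_mm, dta_mm=3.0, dd_pct=3.0):
    """Global 1D gamma analysis (3 mm / 3% by default); returns passing rate in %."""
    dose_ref = np.asarray(dose_ref, float)
    dose_eval = np.asarray(dose_eval, float)
    dd = dd_pct / 100.0 * dose_ref.max()        # global dose-difference criterion
    x = np.arange(dose_ref.size) * spacing_mm   # point positions in mm
    # Pairwise distance and dose-difference terms, each normalized to its criterion;
    # rows index reference points, columns index evaluated points.
    dist2 = ((x[:, None] - x[None, :]) / dta_mm) ** 2
    ddiff2 = ((dose_eval[None, :] - dose_ref[:, None]) / dd) ** 2
    gamma = np.sqrt((dist2 + ddiff2).min(axis=1))  # minimize over evaluated points
    return 100.0 * np.mean(gamma <= 1.0)           # gamma <= 1 counts as a pass
```

A point passes when some nearby evaluated point lies within the combined 3 mm distance-to-agreement and 3% dose-difference ellipsoid; the study reports this rate over full 3D distributions.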

Do Edges Matter? Investigating Edge-Enhanced Pre-Training for Medical Image Segmentation

Paul Zaha, Lars Böcking, Simeon Allmendinger, Leopold Müller, Niklas Kühl

arxiv preprint · Aug 4 2025
Medical image segmentation is crucial for disease diagnosis and treatment planning, yet developing robust segmentation models often requires substantial computational resources and large datasets. Existing research shows that pre-trained and fine-tuned foundation models can boost segmentation performance. However, questions remain about how particular image preprocessing steps may influence segmentation performance across different medical imaging modalities. In particular, edges, abrupt transitions in pixel intensity, are widely acknowledged as vital cues for object boundaries but have not been systematically examined in the pre-training of foundation models. We address this gap by investigating to what extent pre-training with data processed using computationally efficient edge kernels, such as the Kirsch operator, can improve the cross-modality segmentation capabilities of a foundation model. Two versions of a foundation model are first trained on either raw or edge-enhanced data across multiple medical imaging modalities, then fine-tuned on selected raw subsets tailored to specific medical modalities. After systematic investigation using the medical domains Dermoscopy, Fundus, Mammography, Microscopy, OCT, US, and XRay, we discover both increased and reduced segmentation performance across modalities using edge-focused pre-training, indicating the need for a selective application of this approach. To guide such selective applications, we propose a meta-learning strategy. It uses the standard deviation and image entropy of the raw image to choose between a model pre-trained on edge-enhanced data and one pre-trained on raw data for optimal performance. Our experiments show that integrating this meta-learning layer yields an overall segmentation performance improvement across diverse medical imaging tasks of 16.42% compared to models pre-trained on edge-enhanced data only and 19.30% compared to models pre-trained on raw data only.
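A minimal sketch of the two ingredients the abstract names: Kirsch compass-kernel edge enhancement, and a std/entropy rule for routing an image to the edge-pretrained or raw-pretrained model. The thresholding logic in `pick_model` is a hypothetical stand-in for the paper's learned meta-strategy:

```python
import numpy as np

# Kirsch compass kernels: the "bright edge" arc of 5s rotated through 8 orientations.
K = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]], float)
K45 = np.array([[-3, 5, 5], [-3, 0, 5], [-3, -3, -3]], float)
KERNELS = [np.rot90(K, k) for k in range(4)] + [np.rot90(K45, k) for k in range(4)]

def kirsch_edges(img):
    """Edge-enhance a 2D image: max Kirsch response over orientations (zero-padded)."""
    img = np.asarray(img, float)
    padded = np.pad(img, 1)
    out = np.full(img.shape, -np.inf)
    for k in KERNELS:
        resp = sum(k[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
                   for i in range(3) for j in range(3))
        out = np.maximum(out, resp)
    return out

def pick_model(img, std_thresh, entropy_thresh):
    """Hypothetical meta-rule: route to the edge-pretrained model only when the
    raw image's standard deviation and entropy both exceed thresholds."""
    img = np.asarray(img, float)
    hist, _ = np.histogram(img, bins=32)
    p = hist / hist.sum()
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))  # Shannon entropy in bits
    return "edge" if (img.std() > std_thresh and entropy > entropy_thresh) else "raw"
```

The actual paper learns which statistics and thresholds are predictive; this sketch only shows the shape of such a selection layer.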

Joint Lossless Compression and Steganography for Medical Images via Large Language Models

Pengcheng Zheng, Xiaorong Pu, Kecheng Chen, Jiaxin Huang, Meng Yang, Bai Feng, Yazhou Ren, Jianan Jiang

arxiv preprint · Aug 3 2025
Recently, large language models (LLMs) have driven promising progress in lossless image compression. However, directly adopting existing paradigms for medical images suffers from an unsatisfactory trade-off between compression performance and efficiency. Moreover, existing LLM-based compressors often overlook the security of the compression process, which is critical in modern medical scenarios. To this end, we propose a novel joint lossless compression and steganography framework. Inspired by bit plane slicing (BPS), we find it feasible to securely embed privacy messages into medical images in an invisible manner. Based on this insight, an adaptive modalities decomposition strategy is first devised to partition the entire image into two segments, providing global and local modalities for subsequent dual-path lossless compression. During this dual-path stage, we innovatively propose a segmented message steganography algorithm within the local modality path to ensure the security of the compression process. Coupled with the proposed anatomical priors-based low-rank adaptation (A-LoRA) fine-tuning strategy, extensive experimental results demonstrate the superiority of our proposed method in terms of compression ratios, efficiency, and security. The source code will be made publicly available.
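The bit-plane-slicing idea behind the steganography component can be illustrated with a plain least-significant-bit-plane embedding; the paper's segmented algorithm and modality decomposition are more sophisticated than this sketch:

```python
import numpy as np

def embed_lsb(img, message: bytes):
    """Embed message bytes into the least-significant bit plane of a uint8 image."""
    flat = np.asarray(img, np.uint8).ravel().copy()
    bits = np.unpackbits(np.frombuffer(message, np.uint8))
    assert bits.size <= flat.size, "message too long for cover image"
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(np.asarray(img).shape)

def extract_lsb(img, n_bytes: int) -> bytes:
    """Recover n_bytes from the LSB plane of a stego image."""
    flat = np.asarray(img, np.uint8).ravel()
    return np.packbits(flat[: n_bytes * 8] & 1).tobytes()
```

Because only the lowest bit plane changes, each pixel moves by at most one gray level, which is why BPS-style embedding is visually invisible yet must be accounted for by a lossless compressor.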

TopoImages: Incorporating Local Topology Encoding into Deep Learning Models for Medical Image Classification

Pengfei Gu, Hongxiao Wang, Yejia Zhang, Huimin Li, Chaoli Wang, Danny Chen

arxiv preprint · Aug 3 2025
Topological structures in image data, such as connected components and loops, play a crucial role in understanding image content (e.g., biomedical objects). Despite remarkable successes of numerous image processing methods that rely on appearance information, these methods often lack sensitivity to topological structures when used in general deep learning (DL) frameworks. In this paper, we introduce a new general approach, called TopoImages (for Topology Images), which computes a new representation of input images by encoding local topology of patches. In TopoImages, we leverage persistent homology (PH) to encode geometric and topological features inherent in image patches. Our main objective is to capture topological information in local patches of an input image into a vectorized form. Specifically, we first compute persistence diagrams (PDs) of the patches, and then vectorize and arrange these PDs into long vectors for pixels of the patches. The resulting multi-channel image-form representation is called a TopoImage. TopoImages offers a new perspective for data analysis. To garner diverse and significant topological features in image data and ensure a more comprehensive and enriched representation, we further generate multiple TopoImages of the input image using various filtration functions, which we call multi-view TopoImages. The multi-view TopoImages are fused with the input image for DL-based classification, with considerable improvement. Our TopoImages approach is highly versatile and can be seamlessly integrated into common DL frameworks. Experiments on three public medical image classification datasets demonstrate noticeably improved accuracy over state-of-the-art methods.
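The per-patch persistent-homology step can be sketched with a 0-dimensional sublevel-set filtration computed via union-find and the elder rule; real pipelines would use a PH library such as GUDHI or Ripser and then vectorize the diagrams (e.g., sorted birth-death lifetimes) into fixed-length channels:

```python
import numpy as np

def persistence_0d(patch):
    """0-dim persistence diagram of a 2D patch under the sublevel-set filtration
    (4-connectivity). Returns (birth, death) pairs; the essential component
    gets death = inf. A minimal sketch of the PH step in TopoImages."""
    patch = np.asarray(patch, float)
    h, w = patch.shape
    order = np.argsort(patch.ravel(), kind="stable")  # pixels by increasing value
    parent, birth = {}, {}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    pairs = []
    for idx in order:
        i, j = divmod(int(idx), w)
        v = patch[i, j]
        parent[(i, j)] = (i, j)
        birth[(i, j)] = v
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if (ni, nj) in parent:  # neighbor already entered the filtration
                ra, rb = find((i, j)), find((ni, nj))
                if ra != rb:
                    if birth[ra] > birth[rb]:
                        ra, rb = rb, ra  # elder rule: younger component dies
                    pairs.append((birth[rb], v))
                    parent[rb] = ra
    pairs.append((min(birth.values()), np.inf))  # essential component never dies
    return pairs
```

Applying this to every local patch and stacking vectorized diagrams per pixel yields the multi-channel TopoImage representation described above.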