Page 50 of 58574 results

Ovarian Cancer Screening: Recommendations and Future Prospects.

Chiu S, Staley H, Jeevananthan P, Mascarenhas S, Fotopoulou C, Rockall A

PubMed | May 23, 2025
Ovarian cancer remains a significant cause of mortality among women, largely due to challenges in early detection. Current screening strategies, including transvaginal ultrasound and CA125 testing, have limited sensitivity and specificity, particularly in asymptomatic women or those with early-stage disease. The European Society of Gynaecological Oncology, the European Society for Medical Oncology, the European Society of Pathology, and other health organizations currently do not recommend routine population-based screening for ovarian cancer, owing to high false-positive rates and the absence of a reliable early detection method. This review examines existing ovarian cancer screening guidelines and explores recent advances in diagnostic technologies, including radiomics, artificial intelligence, point-of-care testing, and novel detection methods. Emerging technologies show promise for improving ovarian cancer detection by enhancing sensitivity and specificity compared to traditional methods. Artificial intelligence and radiomics have the potential to revolutionize ovarian cancer screening by identifying subtle diagnostic patterns, while liquid biopsy-based approaches and cell-free DNA profiling enable tumor-specific biomarker detection. Minimally invasive methods, such as intrauterine lavage and salivary diagnostics, provide avenues for population-wide applicability. However, large-scale validation is required to establish these techniques as effective and reliable screening options.
· Current ovarian cancer screening methods lack sensitivity and specificity for early-stage detection.
· Emerging technologies such as artificial intelligence, radiomics, and liquid biopsy offer improved diagnostic accuracy.
· Large-scale clinical validation is required, particularly for baseline-risk populations.
· Chiu S, Staley H, Jeevananthan P et al. Ovarian Cancer Screening: Recommendations and Future Prospects. Rofo 2025; DOI 10.1055/a-2589-5696.

Multimodal fusion model for prognostic prediction and radiotherapy response assessment in head and neck squamous cell carcinoma.

Tian R, Hou F, Zhang H, Yu G, Yang P, Li J, Yuan T, Chen X, Chen Y, Hao Y, Yao Y, Zhao H, Yu P, Fang H, Song L, Li A, Liu Z, Lv H, Yu D, Cheng H, Mao N, Song X

PubMed | May 23, 2025
Accurate prediction of prognosis and postoperative radiotherapy response is critical for personalized treatment in head and neck squamous cell carcinoma (HNSCC). We developed a multimodal deep learning model (MDLM) integrating computed tomography, whole-slide images, and clinical features from 1087 HNSCC patients across multiple centers. The MDLM exhibited good performance in predicting overall survival (OS) and disease-free survival in external test cohorts. Additionally, the MDLM outperformed unimodal models. Patients with a high-risk score who underwent postoperative radiotherapy exhibited prolonged OS compared to those who did not (P = 0.016), whereas no significant improvement in OS was observed among patients with a low-risk score (P = 0.898). Biological exploration indicated that the model may be related to changes in the cytochrome P450 metabolic pathway, tumor microenvironment, and myeloid-derived cell subpopulations. Overall, the MDLM effectively predicts prognosis and postoperative radiotherapy response, offering a promising tool for personalized HNSCC therapy.
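The late-fusion-plus-risk-stratification pattern described above can be sketched as follows; the feature dimensions, the linear risk head, and the median split are illustrative assumptions, not the paper's actual MDLM:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient feature vectors from each modality
# (the paper's real encoders and dimensions are not specified here).
ct_feat = rng.normal(size=(8, 16))    # CT image features
wsi_feat = rng.normal(size=(8, 32))   # whole-slide image features
clin_feat = rng.normal(size=(8, 4))   # clinical variables

# Late fusion by concatenation, then a linear head producing a risk score
fused = np.concatenate([ct_feat, wsi_feat, clin_feat], axis=1)
w = rng.normal(size=fused.shape[1])
risk = fused @ w

# Median split into high-/low-risk groups, as used in the paper's
# postoperative-radiotherapy subgroup analysis
high_risk = risk > np.median(risk)
print(fused.shape, int(high_risk.sum()))
```

The point of the sketch is the structure: each modality is encoded separately, the representations are fused into one vector, and a single score drives the stratified survival comparison.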

Automated Detection of Severe Cerebral Edema Using Explainable Deep Transfer Learning after Hypoxic Ischemic Brain Injury.

Wang Z, Kulpanowski AM, Copen WA, Rosenthal ES, Dodelson JA, McCrory DE, Edlow BL, Kimberly WT, Amorim E, Westover M, Ning M, Zabihi M, Schaefer PW, Malhotra R, Giacino JT, Greer DM, Wu O

PubMed | May 23, 2025
Substantial gaps exist in the neuroprognostication of cardiac arrest patients who remain comatose after the restoration of spontaneous circulation. Most studies focus on predicting survival, a measure confounded by the withdrawal of life-sustaining treatment decisions. Severe cerebral edema (SCE) may serve as an objective proximal imaging-based surrogate of neurologic injury. We retrospectively analyzed data from 288 patients to automate SCE detection with machine learning (ML) and to test the hypothesis that the quantitative values produced by these algorithms (ML_SCE) can improve predictions of neurologic outcomes. Ground-truth SCE (GT_SCE) classification was based on radiology reports. The model attained a cross-validated testing accuracy of 87% [95% CI: 84%, 89%] for detecting SCE. Attention maps explaining SCE classification focused on cisternal regions (p<0.05). Multivariable analyses showed that older age (p<0.001), non-shockable initial cardiac rhythm (p=0.004), and greater ML_SCE values (p<0.001) were significant predictors of poor neurologic outcomes, with GT_SCE (p=0.064) as a non-significant covariate. Our results support the feasibility of automated SCE detection. Future prospective studies with standardized neurologic assessments are needed to substantiate the utility of quantitative ML_SCE values to improve neuroprognostication.

Artificial Intelligence enhanced R1 maps can improve lesion detection in focal epilepsy in children

Doumou, G., D'Arco, F., Figini, M., Lin, H., Lorio, S., Piper, R., O'Muircheartaigh, J., Cross, H., Weiskopf, N., Alexander, D., Carmichael, D. W.

medRxiv preprint | May 23, 2025
Background and purpose: MRI is critical for the detection of subtle cortical pathology in epilepsy surgery assessment. This can be aided by the improved image quality and resolution of ultra-high-field (7T) MRI, but poor access and long scan durations limit its widespread use, particularly in a paediatric setting. AI-based learning approaches may provide similar information by enhancing data obtained with conventional MRI (3T). We used a convolutional neural network trained on matched 3T and 7T images to enhance quantitative R1 maps (longitudinal relaxation rate) obtained at 3T in paediatric epilepsy patients and to determine their potential clinical value for lesion identification.
Materials and methods: A 3D U-Net was trained using paired patches from 3T and 7T R1 maps from n=10 healthy volunteers. The trained network was applied to enhance paediatric focal epilepsy 3T R1 images from a different scanner/site (n=17 MRI lesion-positive / n=14 MR-negative). Radiological review assessed image quality, as well as lesion identification and visualization on the enhanced maps in comparison to the 3T R1 maps, without clinical information. Lesion appearance was then compared to 3D-FLAIR.
Results: AI-enhanced R1 maps were superior in image quality to the original 3T R1 maps while preserving and enhancing the visibility of lesions. After exclusion of 5/31 patients (due to movement artefact or incomplete data), lesions were detected on AI-enhanced R1 maps in 14/15 (93%) MR-positive and 4/11 (36%) MR-negative patients.
Conclusion: AI-enhanced R1 maps improved the visibility of lesions in MR-positive patients and provided higher sensitivity in the MR-negative group compared to either the original 3T R1 maps or 3D-FLAIR. This provides promising initial evidence that 3T quantitative maps can outperform conventional 3T imaging via enhancement by an AI model trained on 7T MRI data, without the need for pathology-specific information.

A Deep Learning Vision-Language Model for Diagnosing Pediatric Dental Diseases

Pham, T.

medRxiv preprint | May 22, 2025
This study proposes a deep learning vision-language model for the automated diagnosis of pediatric dental diseases, with a focus on differentiating between caries and periapical infections. The model integrates visual features extracted from panoramic radiographs using methods of non-linear dynamics and textural encoding with textual descriptions generated by a large language model. These multimodal features are concatenated and used to train a 1D-CNN classifier. Experimental results demonstrate that the proposed model outperforms conventional convolutional neural networks and standalone language-based approaches, achieving high accuracy (90%), sensitivity (92%), precision (92%), and an AUC of 0.96. This work highlights the value of combining structured visual and textual representations in improving diagnostic accuracy and interpretability in dental radiology. The approach offers a promising direction for the development of context-aware, AI-assisted diagnostic tools in pediatric dental care.
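The concatenate-then-1D-CNN pipeline described can be sketched as below; the feature dimensions, kernel count, and random weights are placeholders standing in for the paper's trained components:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical fused input: radiograph-derived descriptors plus an LLM text
# embedding (dimensions here are illustrative, not the paper's).
visual = rng.normal(size=64)
textual = rng.normal(size=32)
x = np.concatenate([visual, textual])   # fused 1D input, length 96

# One 1D convolution layer (4 filters, kernel size 3) followed by global max
# pooling and a 2-class linear head: caries vs. periapical infection
kernels = rng.normal(size=(4, 3))
conv = np.stack([np.convolve(x, k, mode="valid") for k in kernels])  # (4, 94)
pooled = conv.max(axis=1)               # (4,)
logits = pooled @ rng.normal(size=(4, 2))
pred = int(np.argmax(logits))
print(x.shape, conv.shape, pred)
```

In the actual study the classifier is trained end-to-end; the sketch only shows how concatenated multimodal features flow through a 1D convolutional stage to a class decision.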

Render-FM: A Foundation Model for Real-time Photorealistic Volumetric Rendering

Zhongpai Gao, Meng Zheng, Benjamin Planche, Anwesa Choudhuri, Terrence Chen, Ziyan Wu

arXiv preprint | May 22, 2025
Volumetric rendering of Computed Tomography (CT) scans is crucial for visualizing complex 3D anatomical structures in medical imaging. Current high-fidelity approaches, especially neural rendering techniques, require time-consuming per-scene optimization, limiting clinical applicability due to computational demands and poor generalizability. We propose Render-FM, a novel foundation model for direct, real-time volumetric rendering of CT scans. Render-FM employs an encoder-decoder architecture that directly regresses 6D Gaussian Splatting (6DGS) parameters from CT volumes, eliminating per-scan optimization through large-scale pre-training on diverse medical data. By integrating robust feature extraction with the expressive power of 6DGS, our approach efficiently generates high-quality, real-time interactive 3D visualizations across diverse clinical CT data. Experiments demonstrate that Render-FM achieves visual fidelity comparable or superior to specialized per-scan methods while drastically reducing preparation time from nearly an hour to seconds for a single inference step. This advancement enables seamless integration into real-time surgical planning and diagnostic workflows. The project page is: https://gaozhongpai.github.io/renderfm/.

An Interpretable Deep Learning Approach for Autism Spectrum Disorder Detection in Children Using NASNet-Mobile.

K VRP, Hima Bindu C, Devi KRM

PubMed | May 22, 2025
Autism spectrum disorder (ASD) is a multifaceted neurodevelopmental disorder characterized by impaired social interaction and communication and by restrictive or repetitive behaviour. Though incurable, early detection and intervention can reduce the severity of symptoms. Structural magnetic resonance imaging (sMRI) can improve diagnostic accuracy, facilitating early diagnosis and more tailored care. With the emergence of deep learning (DL), neuroimaging-based approaches to ASD diagnosis have attracted growing attention. However, many existing models cannot explain the decisions behind their diagnoses. The prime objective of this work is to classify ASD precisely and to make the classification process interpretable, so as to discern the major features relevant to predicting the disorder. The proposed model employs the neural architecture search network mobile (NASNet-Mobile) model for ASD detection, integrated with an explainable artificial intelligence (XAI) technique, local interpretable model-agnostic explanations (LIME), for increased transparency of ASD classification. The model is trained on sMRI images of two age groups taken from the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset. The proposed model yielded an accuracy of 0.9607, F1-score of 0.9614, specificity of 0.9774, sensitivity of 0.9451, negative predictive value (NPV) of 0.9429, positive predictive value (PPV) of 0.9783, and a diagnostic odds ratio of 745.59 for the 2-11 years age group, outperforming the 12-18 years group. These results are superior to those of other state-of-the-art models, Inception v3 and SqueezeNet.
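LIME's core local-surrogate idea, which underlies the interpretability layer here, can be sketched without the classifier itself; the black-box model and the 10-dimensional feature space below are stand-ins, not NASNet-Mobile:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in black-box classifier over 10 image-derived features
w_true = rng.normal(size=10)

def black_box(X):
    return 1.0 / (1.0 + np.exp(-X @ w_true))

x0 = rng.normal(size=10)  # the instance whose prediction we want to explain

# LIME's idea: sample perturbations near x0, weight them by proximity to x0,
# and fit a weighted linear surrogate; its coefficients rank feature importance
Z = x0 + 0.1 * rng.normal(size=(500, 10))
y = black_box(Z)
proximity = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)

# Weighted least squares (sqrt-weighted design matrix), with an intercept
sw = np.sqrt(proximity)
Zb = np.column_stack([np.ones(len(Z)), Z])
coef, *_ = np.linalg.lstsq(Zb * sw[:, None], y * sw, rcond=None)
importance = coef[1:]

top_feature = int(np.argmax(np.abs(importance)))
print(importance.shape, top_feature)
```

The `lime` Python package wraps exactly this recipe for tabular and image inputs; for images it perturbs superpixels rather than raw features.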

CT-Agent: A Multimodal-LLM Agent for 3D CT Radiology Question Answering

Yuren Mao, Wenyi Xu, Yuyang Qin, Yunjun Gao

arXiv preprint | May 22, 2025
Computed Tomography (CT) scans, which produce 3D volumetric medical data that can be viewed as hundreds of cross-sectional images (a.k.a. slices), provide detailed anatomical information for diagnosis. For radiologists, creating CT radiology reports is time-consuming and error-prone. A visual question answering (VQA) system that can answer radiologists' questions about anatomical regions on a CT scan, and even automatically generate a radiology report, is urgently needed. However, existing VQA systems cannot adequately handle the CT radiology question answering (CTQA) task because: (1) anatomic complexity makes CT images difficult to understand; and (2) spatial relationships across hundreds of slices are difficult to capture. To address these issues, this paper proposes CT-Agent, a multimodal agentic framework for CTQA. CT-Agent adopts anatomically independent tools to break down the anatomic complexity; furthermore, it efficiently captures the across-slice spatial relationships with a global-local token compression strategy. Experimental results on two 3D chest CT datasets, CT-RATE and RadGenome-ChestCT, verify the superior performance of CT-Agent.
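A toy version of a global-local token compression conveys the scale argument; the grid sizes, mean pooling, and queried slice range below are illustrative assumptions, not CT-Agent's actual strategy:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical token grid: 200 CT slices, 49 patch tokens per slice, dim 8
tokens = rng.normal(size=(200, 49, 8))

# Global-local idea: every slice contributes one pooled "global" token, while
# only slices covering the queried anatomical region keep full "local" tokens
global_tokens = tokens.mean(axis=1)          # (200, 8)
lo, hi = 90, 96                              # slices for the asked-about region
local_tokens = tokens[lo:hi].reshape(-1, 8)  # (6 * 49, 8)

compressed = np.concatenate([global_tokens, local_tokens], axis=0)
print(compressed.shape, tokens.reshape(-1, 8).shape)
```

Here 9,800 raw tokens shrink to 494 while the region of interest keeps full resolution, which is why such schemes make hundreds of slices tractable for an LLM context window.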

SD-MAD: Sign-Driven Few-shot Multi-Anomaly Detection in Medical Images

Kaiyu Guo, Tan Pan, Chen Jiang, Zijian Wang, Brian C. Lovell, Limei Han, Yuan Cheng, Mahsa Baktashmotlagh

arXiv preprint | May 22, 2025
Medical anomaly detection (AD) is crucial for early clinical intervention, yet it faces challenges due to limited access to high-quality medical imaging data, caused by privacy concerns and data silos. Few-shot learning has emerged as a promising approach to alleviate these limitations by leveraging the large-scale prior knowledge embedded in vision-language models (VLMs). Recent advancements in few-shot medical AD have treated normal and abnormal cases as a one-class classification problem, often overlooking distinctions among multiple anomaly categories. Thus, in this paper, we propose a framework tailored for few-shot medical anomaly detection in scenarios where the identification of multiple anomaly categories is required. To capture the detailed radiological signs of medical anomaly categories, our framework incorporates diverse textual descriptions for each category generated by a large language model, under the assumption that different anomalies in medical images may share common radiological signs within each category. Specifically, we introduce SD-MAD, a two-stage Sign-Driven few-shot Multi-Anomaly Detection framework: (i) radiological signs are aligned with anomaly categories by amplifying inter-anomaly discrepancy; (ii) aligned signs are further selected at inference, via an automatic sign selection strategy, to mitigate the under-fitting and uncertain-sample issues caused by limited medical data. Moreover, we propose three protocols to comprehensively quantify the performance of multi-anomaly detection. Extensive experiments illustrate the effectiveness of our method.

HealthiVert-GAN: A Novel Framework of Pseudo-Healthy Vertebral Image Synthesis for Interpretable Compression Fracture Grading.

Zhang Q, Chuang C, Zhang S, Zhao Z, Wang K, Xu J, Sun J

PubMed | May 22, 2025
Osteoporotic vertebral compression fractures (OVCFs) are prevalent in the elderly population and are typically assessed on computed tomography (CT) scans by evaluating vertebral height loss. This assessment helps determine the fracture's impact on spinal stability and the need for surgical intervention. However, the absence of pre-fracture CT scans and standardized vertebral references leads to measurement errors and inter-observer variability, while irregular compression patterns further challenge the precise grading of fracture severity. While deep learning methods have shown promise in aiding OVCF screening, they often lack interpretability and sufficient sensitivity, limiting their clinical applicability. To address these challenges, we introduce a novel vertebra synthesis - height loss quantification - OVCF grading framework. Our proposed model, HealthiVert-GAN, utilizes a coarse-to-fine synthesis network designed to generate pseudo-healthy vertebral images that simulate the pre-fracture state of fractured vertebrae. This model integrates three auxiliary modules that leverage the morphology and height information of adjacent healthy vertebrae to ensure anatomical consistency. Additionally, we introduce the Relative Height Loss of Vertebrae (RHLV) as a quantification metric, which divides each vertebra into three sections to measure height loss between pre-fracture and post-fracture states, followed by fracture severity classification using a Support Vector Machine (SVM). Our approach achieves state-of-the-art classification performance on both the Verse2019 dataset and an in-house dataset, and it provides cross-sectional distribution maps of vertebral height loss. This practical tool enhances diagnostic accuracy in clinical settings and assists in surgical decision-making.
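The RHLV computation described (three sections, height loss relative to the synthesized pseudo-healthy vertebra) can be sketched numerically; the height profiles and the fixed severity cut-offs below are illustrative assumptions, and the paper feeds the three RHLV values to an SVM rather than thresholding:

```python
import numpy as np

# Hypothetical height profiles (mm) sampled along a vertebra's width, for the
# synthesized pseudo-healthy vertebra and the observed fractured one
healthy = np.array([24.0, 24.2, 24.5, 24.3, 24.1, 23.9, 23.8, 23.7, 23.6])
fractured = np.array([23.5, 22.8, 20.1, 18.9, 19.5, 21.0, 22.6, 23.0, 23.2])

def rhlv(healthy_h, fractured_h):
    """Relative height loss per anterior/middle/posterior third."""
    sections_h = np.array_split(healthy_h, 3)
    sections_f = np.array_split(fractured_h, 3)
    return np.array([(h.mean() - f.mean()) / h.mean()
                     for h, f in zip(sections_h, sections_f)])

loss = rhlv(healthy, fractured)

# Illustrative grading on the maximum section loss (stand-in for the SVM)
grade = int(np.digitize(loss.max(), [0.20, 0.40]))  # 0=mild, 1=moderate, 2=severe
print(np.round(loss, 3), grade)
```

In this example the middle third shows the largest relative loss, matching the biconcave compression pattern the sectioned metric is designed to expose.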