
Akindele RG, Adebayo S, Yu M, Kanda PS

PubMed · Aug 22, 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder with increasing prevalence among the ageing population, necessitating early and accurate diagnosis for effective disease management. In this study, we present a novel hybrid deep learning framework, AlzhiNet, that integrates both 2D convolutional neural networks (2D-CNNs) and 3D convolutional neural networks (3D-CNNs), along with a custom loss function and volumetric data augmentation, to enhance feature extraction and improve classification performance in AD diagnosis. In extensive experiments, AlzhiNet outperforms standalone 2D and 3D models, highlighting the importance of combining these complementary representations of the data. The depth and quality of the 3D volumes derived from the augmented 2D slices also significantly influence the model's performance. The results indicate that carefully selecting the weighting factors in hybrid predictions is imperative for achieving optimal results. Our framework was validated on magnetic resonance imaging (MRI) data from the Kaggle and MIRIAD datasets, obtaining accuracies of 98.9% and 99.99%, respectively, with an AUC of 100%. Furthermore, AlzhiNet was studied under a variety of perturbation scenarios on the Alzheimer's Kaggle dataset, including Gaussian noise, brightness, contrast, salt-and-pepper noise, color jitter, and occlusion. The results show that AlzhiNet is more robust to perturbations than ResNet-18, making it an excellent choice for real-world applications. This approach represents a promising advancement in early diagnosis and treatment planning for AD.
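The weighted hybrid prediction described above can be read as a convex combination of the 2D-CNN and 3D-CNN class probabilities. A minimal sketch, assuming a simple linear fusion; the function name and weight value are illustrative, not AlzhiNet's actual implementation:

```python
def fuse_predictions(p2d, p3d, w=0.5):
    """Convex combination of per-class probabilities from a 2D and a 3D CNN.

    p2d, p3d: per-class probability lists from each branch.
    w: weight on the 2D branch (illustrative default, not AlzhiNet's value).
    """
    if not 0.0 <= w <= 1.0:
        raise ValueError("w must lie in [0, 1]")
    return [w * a + (1.0 - w) * b for a, b in zip(p2d, p3d)]

# Hypothetical two-class example (AD vs. control)
fused = fuse_predictions([0.8, 0.2], [0.6, 0.4], w=0.5)
```

The abstract's point that weighting factors must be chosen carefully corresponds to tuning `w` on validation data rather than fixing it a priori.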

De Rubeis G, Stasolla A, Piccoli C, Federici M, Cozzolino V, Lovullo G, Leone E, Pesapane F, Fabiano S, Bertaccini L, Pingi A, Galluzzo M, Saba L, Pampana E

PubMed · Aug 22, 2025
According to guidelines, computed tomography perfusion (CTP) should be read and analyzed using computer-aided software. This study evaluates the efficacy of AI/ML (machine learning)-driven software in CTP imaging and the effect of neuroradiologists' interpretation of these automated results. We conducted a retrospective, single-center cohort study from June to December 2023 at a comprehensive stroke center. A total of 132 patients suspected of acute ischemic stroke underwent CTP. The AI software RAPID.AI was utilized for initial analysis, with subsequent validation and adjustments made by experienced neuroradiologists. The rate of CTP maps marked as "non-reportable", "reportable", and "reportable with correction" by neuroradiologists was recorded. The degree of confidence in the report of the basal and angio-CT scan was assessed before and after CTP visualization. Statistical analysis included logistic regression and F1-score assessments to evaluate the predictive accuracy of AI-generated CTP maps. RESULTS: The study found that CTP maps derived from AI software were reportable in 65.2% of cases without artifacts, improving to 87.9% reportable cases when reviewed by neuroradiologists. Key predictive factors for artifact-free CTP maps included motion parameters and the timing of contrast peak distances. There was a significant shift to higher confidence scores for the angiographic phase of the CT after review of the CTP results. CONCLUSIONS: Neuroradiologists play an indispensable role in enhancing the reliability of CTP imaging by interpreting and correcting AI-processed maps. CTP = computed tomography perfusion; AI/ML = artificial intelligence/machine learning; LVO = large vessel occlusion.
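The F1-score assessment mentioned above reduces to precision and recall computed from raw classification counts. A minimal sketch with made-up counts (the specific numbers are illustrative, not taken from the study):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts.

    tp: true positives, fp: false positives, fn: false negatives.
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 8 correctly flagged artifact-free maps,
# 1 false alarm, 3 missed
score = f1_score(8, 1, 3)
```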

Qian YF, Zhou JJ, Shi SL, Guo WL

PubMed · Aug 22, 2025
The objective of this study was to identify risk factors for enema reduction failure and to establish a combined model integrating deep learning (DL) features and clinical features for predicting surgical intervention in intussusception in children younger than 8 months of age. This was a retrospective study with a prospective validation cohort. The retrospective data were collected from two hospitals in southeast China between January 2017 and December 2022; the prospective data were collected between January 2023 and July 2024. A total of 415 intussusception cases in patients younger than 8 months were included in the study. The 280 cases collected from Centre 1 were randomly divided into two groups at a 7:3 ratio: the training cohort (n=196) and the internal validation cohort (n=84). The 85 cases collected from Centre 2 were designated as the external validation cohort. Pretrained DL networks were used to extract deep transfer learning features, with least absolute shrinkage and selection operator (LASSO) regression selecting the non-zero coefficient features. The clinical features were screened by univariate and multivariate logistic regression analyses. We constructed a combined model that integrated the two selected types of features, along with individual clinical and DL models for comparison. Additionally, the combined model was validated in a prospective cohort (n=50) collected from Centre 1. In the internal and external validation cohorts, the combined model (area under the curve (AUC): 0.911 and 0.871, respectively) demonstrated better performance for predicting surgical intervention in intussusception in children younger than 8 months of age than the clinical model (AUC: 0.776 and 0.740, respectively) and the DL model (AUC: 0.828 and 0.793, respectively). In the prospective validation cohort, the combined model also demonstrated impressive performance, with an AUC of 0.890.
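The 7:3 random split of the 280 Centre 1 cases can be reproduced with a seeded shuffle; a minimal sketch, where the seed and the use of integer case IDs are assumptions for illustration:

```python
import random

def train_val_split(cases, train_frac=0.7, seed=42):
    """Deterministically shuffle case IDs and split into train/validation."""
    cases = list(cases)
    random.Random(seed).shuffle(cases)  # seeded for reproducibility
    cut = round(len(cases) * train_frac)
    return cases[:cut], cases[cut:]

# 280 Centre-1 cases -> 196 training / 84 internal validation
train, val = train_val_split(range(280))
```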
The combined model, integrating DL and clinical features, demonstrated stable predictive accuracy, suggesting its potential for improving clinical therapeutic strategies for intussusception.

Zhao Z, Hu Y, Xu LX, Sun J

PubMed · Aug 22, 2025
Image-guided tumor ablation (IGTA) has revolutionized modern oncological treatments by providing minimally invasive options that ensure precise tumor eradication with minimal patient discomfort. Traditional techniques such as ultrasound (US), Computed Tomography (CT), and Magnetic Resonance Imaging (MRI) have been instrumental in the planning, execution, and evaluation of ablation therapies. However, these methods often face limitations, including poor contrast, susceptibility to artifacts, and variability in operator expertise, which can undermine the accuracy of tumor targeting and therapeutic outcomes. Incorporating deep learning (DL) into IGTA represents a significant advancement that addresses these challenges. This review explores the role and potential of DL in different phases of tumor ablation therapy: preoperative, intraoperative, and postoperative. In the preoperative stage, DL excels in advanced image segmentation, enhancement, and synthesis, facilitating precise surgical planning and optimized treatment strategies. During the intraoperative phase, DL supports image registration and fusion, and real-time surgical planning, enhancing navigation accuracy and ensuring precise ablation while safeguarding surrounding healthy tissues. In the postoperative phase, DL is pivotal in automating the monitoring of treatment responses and in the early detection of recurrences through detailed analyses of follow-up imaging. This review highlights the essential role of deep learning in modernizing IGTA, showcasing its significant implications for procedural safety, efficacy, and patient outcomes in oncology. As deep learning technologies continue to evolve, they are poised to redefine the standards of care in tumor ablation therapies, making treatments more accurate, personalized, and patient-friendly.

Li H, Zhang Y, Mei H, Yuan Y, Wang L, Liu W, Zeng H, Huang J, Chai X, Wu K, Liu H

PubMed · Aug 22, 2025
Placenta accreta spectrum (PAS) is a serious perinatal complication. Accurate preoperative identification of patients at high risk for adverse clinical outcomes is essential for developing personalized treatment strategies. This study aimed to develop and validate a high-performance, interpretable machine learning model that integrates MRI morphological indicators and clinical features to predict adverse outcomes in PAS, and to build an online prediction tool to enhance its clinical applicability. This retrospective study included 125 clinically confirmed PAS patients from two centers, categorized into high-risk (intraoperative blood loss over 1500 mL or requiring hysterectomy) and low-risk groups. Data from Center 1 were used for model development, and data from Center 2 served as the external validation set. Five MRI morphological indicators and six clinical features were extracted as model inputs. Three machine learning classifiers (AdaBoost, TabPFN, and CatBoost) were trained and evaluated on both the internal testing and external validation cohorts. SHAP analysis was used to interpret model decision-making, and the optimal model was deployed via a Streamlit-based web platform. The CatBoost model achieved the best performance, with AUROCs of 0.90 (95% CI: 0.73-0.99) and 0.84 (95% CI: 0.70-0.97) in the internal testing and external validation sets, respectively. Calibration curves indicated strong agreement between predicted and actual risks. SHAP analysis revealed that "Cervical canal length" and "Gestational age" contributed negatively to high-risk predictions, while "Prior C-sections number", "Placental abnormal vasculature area", and "Parturition" were positively associated. The final online tool allows real-time risk prediction and visualization of individualized force plots and is freely accessible to clinicians and patients. This study successfully developed an interpretable and practical machine learning model for predicting adverse clinical outcomes in PAS.
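The SHAP interpretation above rests on additivity: each model output equals a base value plus the sum of per-feature contributions. A minimal sketch with invented contribution values whose signs mirror the abstract (all numbers are hypothetical, not from the study):

```python
def shap_reconstruct(base_value, shap_values):
    """Reconstruct a model output from its base value plus SHAP contributions."""
    return base_value + sum(shap_values.values())

# Hypothetical per-feature contributions (log-odds scale); signs follow
# the abstract: cervix length and gestational age lower risk, the rest raise it
contrib = {
    "Cervical canal length": -0.40,
    "Gestational age": -0.15,
    "Prior C-sections number": 0.55,
    "Placental abnormal vasculature area": 0.70,
    "Parturition": 0.10,
}
risk_logit = shap_reconstruct(-0.30, contrib)
```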
The accompanying online tool may support clinical decision-making and improve individualized management for PAS patients.

Sadoune S, Richard A, Talbot F, Guyet T, Boussel L, Berry H

PubMed · Aug 22, 2025
Correct automatic analysis of a medical report requires the identification of negations and their scopes. Since most available training data comes from medical texts in English, applying these methods to non-English languages usually takes additional work. Here, we introduce a supervised learning method for automatically identifying negation cues and determining their scopes in French medical reports using language models based on BERT. Using a new private corpus of French-language chest CT scan reports with consistent annotation, we first fine-tuned five available transformer models on the negation cue and scope identification task. Subsequently, we extended the methodology by modifying the optimal model to encompass a wider range of clinical notes and reports (not limited to radiology reports) and more heterogeneous annotations. Lastly, we tested the generated model on its initial mask-filling task to ensure there was no catastrophic forgetting. On a corpus of thoracic CT scan reports annotated by four annotators within our team, our method reaches an F1-score of 99.4% for cue detection and 94.5% for scope detection, thus equaling or improving on state-of-the-art performance. On more generic biomedical reports, annotated with more heterogeneous rules, the quality of the automatic analysis naturally decreases, but our best-in-class model still delivers very good performance, with F1-scores of 98.2% (cue detection) and 90.9% (scope detection). Moreover, we show that fine-tuning the original model for the negation identification task preserves or even improves its performance on its initial fill-mask task, depending on the lemmatization. Considering the performance of our fine-tuned model for the detection of negation cues and scopes in French medical reports, and its robustness with respect to the diversity of annotation rules and types of biomedical data, we conclude that it is suited for use in a real-life clinical context.
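Scope detection is typically scored at the token level: the predicted set of in-scope token indices is compared against the gold set. A minimal, library-free sketch of that evaluation (the BERT model itself is out of scope here, and the example sentence is invented):

```python
def scope_f1(gold_tokens, pred_tokens):
    """Token-level F1 between gold and predicted negation-scope index sets."""
    gold, pred = set(gold_tokens), set(pred_tokens)
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)               # tokens correctly placed in scope
    precision, recall = tp / len(pred), tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical: "pas de fracture visible" -> gold scope = tokens 2-3,
# the model only recovers token 2
score = scope_f1({2, 3}, {2})
```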

Hafeez Ur Rehman, Sumaiya Fazal, Moutaz Alazab, Ali Baydoun

arXiv preprint · Aug 22, 2025
Glioblastomas, constituting over 50% of malignant brain tumors, are highly aggressive neoplasms that pose substantial treatment challenges due to their rapid progression and resistance to standard therapies. The methylation status of the O-6-Methylguanine-DNA Methyltransferase (MGMT) gene is a critical biomarker for predicting patient response to treatment, particularly with the alkylating agent temozolomide. However, accurately predicting MGMT methylation status using non-invasive imaging techniques remains challenging due to the complex and heterogeneous nature of glioblastomas, including uneven contrast, variability within lesions, and irregular enhancement patterns. This study introduces the Convolutional Autoencoders for MGMT Methylation Status Prediction (CAMP) framework, which is based on adaptive sparse penalties to enhance predictive accuracy. The CAMP framework operates in two phases: first, generating synthetic MRI slices through a tailored autoencoder that effectively captures and preserves intricate tissue and tumor structures across different MRI modalities; second, predicting MGMT methylation status using a convolutional neural network enhanced by adaptive sparse penalties. The adaptive sparse penalty dynamically adjusts to variations in the data, such as contrast differences and tumor locations in MR images. Our method excels in MRI image synthesis, preserving brain tissue, fat, and individual tumor structures across all MRI modalities. Validated on benchmark datasets, CAMP achieved an accuracy of 0.97, specificity of 0.98, and sensitivity of 0.97, significantly outperforming existing methods. These results demonstrate the potential of the CAMP framework to improve the interpretation of MRI data and contribute to more personalized treatment strategies for glioblastoma patients.
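An adaptive sparse penalty can be read as an L1 term whose strength varies with properties of the input. A hand-rolled sketch; the linear scaling with contrast is an assumption for illustration, not CAMP's actual formula:

```python
def penalized_loss(mse, weights, contrast, base_lambda=0.01):
    """Data loss plus an L1 penalty whose strength scales with image contrast.

    mse: data-fit term; weights: model parameters being sparsified;
    contrast: a per-image statistic in [0, inf). The (1 + contrast)
    scaling rule is an illustrative assumption.
    """
    lam = base_lambda * (1.0 + contrast)
    return mse + lam * sum(abs(w) for w in weights)

# Higher-contrast input -> stronger sparsity pressure on the same weights
loss_flat = penalized_loss(0.5, [1.0, -2.0, 0.5], contrast=0.0)
loss_sharp = penalized_loss(0.5, [1.0, -2.0, 0.5], contrast=1.0)
```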

Voss A, Suoranta S, Nissinen T, Hurskainen O, Masarwah A, Sund R, Tohka J, Väänänen SP

PubMed · Aug 22, 2025
Abdominal aortic calcification (AAC) is an independent predictor of cardiovascular diseases (CVDs). AAC is typically detected as an incidental finding in spine scans. Early detection of AAC through opportunistic screening using any available imaging modality could help identify individuals with a higher risk of developing clinical CVDs. However, AAC is not routinely assessed in clinics, and manual scoring from projection images is time-consuming and prone to inter-rater variability. Automated AAC scoring methods exist, but earlier methods have not accounted for the inherent variability in AAC scoring and were developed for a single imaging modality at a time. We propose an automated method for quantifying AAC from lumbar spine X-ray and dual-energy X-ray absorptiometry (DXA) images using an ensemble of convolutional neural network models that predicts a distribution of probable AAC scores. We treat the AAC score as a normally distributed random variable to account for the variability of manual scoring. The mean and variance of the assumed normal AAC distributions are estimated from manual annotations, and the models in the ensemble are trained by simulating AAC scores from these distributions. Our ensemble approach successfully extracted AAC scores from both X-ray and DXA images, with predicted score distributions demonstrating strong agreement with manual annotations, as evidenced by concordance correlation coefficients of 0.930 for X-ray and 0.912 for DXA. The prediction error between the average estimates of our approach and the average manual annotations was lower than previously reported errors, highlighting the benefit of incorporating uncertainty in AAC scoring.
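Treating the AAC score as a normally distributed random variable means each ensemble member can be trained on labels drawn from N(mean, sd) per image. A minimal sketch of that label simulation; the function name and (mean, sd) values are illustrative assumptions:

```python
import random

def simulate_labels(annotations, seed=0):
    """Draw one training label per image from its annotated N(mean, sd).

    annotations: list of (mean, sd) pairs, e.g. pooled over manual raters.
    A different seed per ensemble member yields a different label set.
    """
    rng = random.Random(seed)
    return [rng.gauss(mu, sd) for mu, sd in annotations]

# Hypothetical per-image annotation statistics
labels = simulate_labels([(4.0, 1.0), (12.0, 2.5), (0.5, 0.5)])
```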

Yuhui Tao, Zhongwei Zhao, Zilong Wang, Xufang Luo, Feng Chen, Kang Wang, Chuanfu Wu, Xue Zhang, Shaoting Zhang, Jiaxi Yao, Xingwei Jin, Xinyang Jiang, Yifan Yang, Dongsheng Li, Lili Qiu, Zhiqiang Shao, Jianming Guo, Nengwang Yu, Shuo Wang, Ying Xiong

arXiv preprint · Aug 22, 2025
The non-invasive assessment of increasingly incidentally discovered renal masses is a critical challenge in urologic oncology, where diagnostic uncertainty frequently leads to the overtreatment of benign or indolent tumors. In this study, we developed and validated RenalCLIP, a vision-language foundation model for the characterization, diagnosis, and prognosis of renal masses, using a dataset of 27,866 CT scans from 8,809 patients across nine Chinese medical centers and the public TCIA cohort. The model was developed via a two-stage pre-training strategy that first enhances the image and text encoders with domain-specific knowledge before aligning them through a contrastive learning objective, creating robust representations for superior generalization and diagnostic precision. RenalCLIP achieved better performance and superior generalizability across 10 core tasks spanning the full clinical workflow of kidney cancer, including anatomical assessment, diagnostic classification, and survival prediction, compared with other state-of-the-art general-purpose CT foundation models. Notably, for a complicated task like recurrence-free survival prediction in the TCIA cohort, RenalCLIP achieved a C-index of 0.726, representing a substantial improvement of approximately 20% over the leading baselines. Furthermore, RenalCLIP's pre-training imparted remarkable data efficiency: in the diagnostic classification task, it needed only 20% of the training data to reach the peak performance of all baseline models, even after they were fully fine-tuned on 100% of the data. Additionally, it achieved superior performance in report generation, image-text retrieval, and zero-shot diagnosis tasks. Our findings establish that RenalCLIP provides a robust tool with the potential to enhance diagnostic accuracy, refine prognostic stratification, and personalize the management of patients with kidney cancer.
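The reported C-index of 0.726 measures how often a higher predicted risk coincides with an earlier observed event, over all comparable patient pairs. A minimal, censoring-aware sketch of the metric (the toy data is invented):

```python
def c_index(times, events, risks):
    """Concordance index for survival predictions.

    times: follow-up times; events: 1 if the event was observed, 0 if
    censored; risks: model risk scores (higher = worse predicted outcome).
    """
    concordant = ties = comparable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i had the event before time j
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / comparable if comparable else 0.0

# Toy cohort: earliest event gets the highest risk -> perfect concordance
perfect = c_index([3, 5, 8], [1, 1, 1], [0.9, 0.2, 0.1])
```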

Hélène Corbaz, Anh Nguyen, Victor Schulze-Zachau, Paul Friedrich, Alicia Durrer, Florentin Bieder, Philippe C. Cattin, Marios N Psychogios

arXiv preprint · Aug 22, 2025
Patients undergoing a mechanical thrombectomy procedure usually have a multi-detector CT (MDCT) scan before and after the intervention. The image quality of the flat-panel detector CT (FDCT) present in the intervention room is generally much lower than that of an MDCT due to significant artifacts. However, using only FDCT images could improve patient management, as the patient would not need to be moved to the MDCT room. Several studies have evaluated the potential use of FDCT imaging alone and the time that could be saved by acquiring the images before and/or after the intervention with the FDCT only. This study proposes using a denoising diffusion probabilistic model (DDPM) to improve the image quality of FDCT scans, making them comparable to MDCT scans. Clinicians evaluated FDCT, MDCT, and our model's predictions for diagnostic purposes using a questionnaire. The DDPM eliminated most artifacts and improved anatomical visibility without reducing bleeding detection, provided that the input FDCT image quality was not too low. Our code can be found on GitHub.
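A DDPM learns to invert a fixed noising process; the closed-form forward step that corrupts an image can be sketched as follows (toy one-dimensional "image"; the two-step beta schedule is an illustrative assumption, not the paper's schedule):

```python
import math

def forward_diffuse(x0, betas, t, noise):
    """Sample x_t from q(x_t | x_0) = N(sqrt(abar_t) * x_0, (1 - abar_t) I).

    x0: clean signal values; betas: noise schedule; t: timestep index;
    noise: standard-normal draws, one per element of x0.
    """
    abar = 1.0
    for beta in betas[: t + 1]:
        abar *= 1.0 - beta          # cumulative product of alphas
    return [math.sqrt(abar) * x + math.sqrt(1.0 - abar) * e
            for x, e in zip(x0, noise)]

# Toy example: two-pixel "image", two-step schedule, noise fixed at zero
# so the output is just the clean signal scaled by sqrt(abar_t)
xt = forward_diffuse([1.0, -1.0], betas=[0.1, 0.2], t=1, noise=[0.0, 0.0])
```

In training, the network sees `x_t` and is asked to predict the injected noise; at inference it runs the learned reverse steps to denoise, which is the mechanism applied here to FDCT scans.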
