
An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

PubMed | Jul 16 2025
Accurate early-stage diagnosis of gallbladder cancer (GBC) remains one of the major challenges in oncology, yet few studies have addressed comprehensive GBC classification from multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 patients with gallbladder disease or volunteers, scanned on two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To improve feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, GHL-Net, was also developed. An ensemble learning strategy is employed to fuse the multi-modality data and obtain the final classification result. In addition, two interpretability methods are applied to help clinicians understand the model's decisions. Model performance was evaluated with accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed the comparison methods on both datasets. In the binary classification scenario in particular, it achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC, at 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. Visualizations produced by the interpretability methods also showed high clinical relevance of the intermediate decision-making process, and ablation studies provided an in-depth understanding of the methodology. The machine-learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a significant impact in other cancer diagnosis scenarios as well.
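As a rough illustration of the ensemble fusion step described above, the sketch below trains one classifier per modality and averages their predicted probabilities (soft voting). It uses synthetic stand-in features and off-the-shelf scikit-learn models; it is not the authors' GHL-Net or their exact ensemble, where the imaging branch would supply learned image features rather than random numbers.

```python
# Minimal sketch of late-fusion ensemble classification across modalities.
# NOT the authors' GHL-Net pipeline; features and labels are synthetic
# placeholders used only to illustrate the general strategy.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 298  # cohort size mentioned in the abstract

# Hypothetical per-modality feature blocks (imaging embeddings vs. clinical data).
X_imaging = rng.normal(size=(n, 64))   # stand-in for CT image features
X_clinical = rng.normal(size=(n, 12))  # stand-in for demographics/labs/tumor markers
y = rng.integers(0, 2, size=n)         # synthetic binary GBC vs. non-GBC label

idx_train, idx_test = train_test_split(np.arange(n), test_size=0.3, random_state=0)

# Train one base classifier per modality.
clf_imaging = RandomForestClassifier(random_state=0).fit(X_imaging[idx_train], y[idx_train])
clf_clinical = LogisticRegression(max_iter=1000).fit(X_clinical[idx_train], y[idx_train])

# Late fusion: average the predicted probabilities (soft voting).
p_imaging = clf_imaging.predict_proba(X_imaging[idx_test])[:, 1]
p_clinical = clf_clinical.predict_proba(X_clinical[idx_test])[:, 1]
p_fused = (p_imaging + p_clinical) / 2
y_pred = (p_fused >= 0.5).astype(int)
print("fused accuracy:", (y_pred == y[idx_test]).mean())
```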

Utilizing machine learning to predict MRI signal outputs from iron oxide nanoparticles through the PSLG algorithm.

Hataminia F, Azinfar A

PubMed | Jul 16 2025
In this research, we predict the signal generated by iron oxide-based nanoparticles in magnetic resonance imaging (MRI) from the physical properties of the nanoparticles and the MRI machine. The parameters considered are the size of the nanoparticles' magnetic core, their saturation magnetization (Ms), the nanoparticle concentration (C), and the magnetic field (MF) strength of the MRI device. These parameters serve as the model's input variables, while the relaxation rate R<sub>2</sub> (s<sup>-1</sup>) is the output variable. To develop the model, we employed a machine learning approach based on a neural network known as SA-LOOCV-GRBF (SLG) and compared two random selection patterns: SLG disperse random selection (DSLG) and SLG parallel random selection (PSLG). Evaluated by mean square error (MSE), DSLG was more sensitive to the number of neurons in the hidden layers than PSLG, whereas PSLG delivered strong performance while remaining less sensitive to increasing neuron counts. The new pattern, PSLG, was therefore selected for predicting MRI behavior.
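A minimal sketch of the general idea, predicting R2 from the four named inputs with a Gaussian-RBF model scored by leave-one-out MSE, is shown below. A Gaussian process regressor with an RBF kernel stands in for the SA-LOOCV-GRBF network, and the data are synthetic; this is not the DSLG/PSLG algorithm itself.

```python
# Minimal sketch: Gaussian-RBF regression of R2 (s^-1) from core size, Ms,
# concentration, and field strength, evaluated with leave-one-out MSE.
# NOT the SA-LOOCV-GRBF/PSLG algorithm; data below are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
# Columns: core diameter (nm), Ms (emu/g), concentration (mM Fe), field (T) -- synthetic ranges.
X = rng.uniform([5, 20, 0.05, 1.5], [25, 90, 1.0, 7.0], size=(40, 4))
y = 10 + 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 2, size=40)  # fake R2 values

model = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
preds = np.empty_like(y)
for train_idx, test_idx in LeaveOneOut().split(X):
    model.fit(X[train_idx], y[train_idx])
    preds[test_idx] = model.predict(X[test_idx])
print("LOOCV MSE:", mean_squared_error(y, preds))
```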

Imaging analysis using Artificial Intelligence to predict outcomes after endovascular aortic aneurysm repair: protocol for a retrospective cohort study.

Lareyre F, Raffort J, Kakkos SK, D'Oria M, Nasr B, Saratzis A, Antoniou GA, Hinchliffe RJ

PubMed | Jul 16 2025
Endovascular aortic aneurysm repair (EVAR) requires long-term surveillance to detect and treat postoperative complications, yet prediction models to optimise follow-up strategies are still lacking. The primary objective of this study is to develop predictive models of postoperative outcomes following elective EVAR using Artificial Intelligence (AI)-driven analysis; the secondary objective is to investigate morphological aortic changes following EVAR. This international, multicentre, observational study will retrospectively include 500 patients who underwent elective EVAR. Primary outcomes are postoperative EVAR complications, including death, re-intervention, endoleak, limb occlusion, and stent-graft migration, occurring within 1 year and at mid-term follow-up (1 to 3 years). Secondary outcomes are aortic anatomical changes. Morphological changes following EVAR will be analysed and compared on preoperative and postoperative CT angiography (CTA) images (within 1 to 12 months, and at the last follow-up) using the AI-based software PRAEVAorta 2 (Nurea). Deep learning algorithms will be applied to stratify the risk of postoperative outcomes into low- and high-risk categories. The training and testing datasets will comprise 70% and 30% of the cohort, respectively. The study protocol is designed to ensure that the sponsor and the investigators comply with the principles of the Declaration of Helsinki and the ICH E6 good clinical practice guideline. The study has been approved by the ethics committee of the University Hospital of Patras (Patras, Greece) under the number 492/05.12.2024. The results will be presented at relevant national and international conferences and submitted for publication to peer-reviewed journals.
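The planned 70/30 split and binary low-/high-risk stratification could look roughly like the sketch below; the features, labels, and classifier are placeholders, not the PRAEVAorta 2 pipeline or the study's actual deep learning models.

```python
# Minimal sketch of a stratified 70/30 train/test split with a binary
# low-/high-risk classifier. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n = 500  # planned cohort size from the protocol
X = rng.normal(size=(n, 20))    # stand-in for CTA-derived morphological features
y = rng.integers(0, 2, size=n)  # 1 = postoperative complication within 1 year (synthetic)

# Stratified 70/30 split, as described in the protocol.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```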

From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow.

Fink A, Rau S, Kästingschäfer K, Weiß J, Bamberg F, Russe MF

PubMed | Jul 16 2025
Large language models (LLMs) hold great promise for optimizing and supporting radiology workflows amidst rising workloads. This review examines potential applications in daily radiology practice, as well as remaining challenges and potential solutions. Potential applications and challenges are presented and illustrated with practical examples and concrete optimization suggestions. LLM-based assistance systems have potential applications in almost all language-based process steps of the radiological workflow. Significant progress has been made in areas such as report generation, particularly with retrieval-augmented generation (RAG) and multi-step reasoning approaches. However, challenges related to hallucinations, reproducibility, data protection, and ethical concerns need to be addressed before widespread implementation. LLMs have immense potential in radiology, particularly for supporting language-based process steps, with technological advances such as RAG and cloud-based approaches potentially accelerating clinical implementation.
· LLMs can optimize reporting and other language-based processes in radiology with technologies such as RAG and multi-step reasoning approaches.
· Challenges such as hallucinations, reproducibility, privacy, and ethical concerns must be addressed before widespread adoption.
· RAG and cloud-based approaches could help overcome these challenges and advance the clinical implementation of LLMs.
· Fink A, Rau S, Kästingschäfer K et al. From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow. Rofo 2025; DOI 10.1055/a-2641-3059.
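For readers unfamiliar with retrieval-augmented generation, the sketch below shows the basic pattern: retrieve the most relevant reference snippets for a referral question and prepend them to the LLM prompt. The knowledge-base entries are illustrative, and the `generate` function is a placeholder rather than any specific LLM API.

```python
# Minimal RAG sketch: TF-IDF retrieval of guideline snippets, then prompt
# assembly for an LLM. The knowledge base is illustrative and `generate`
# is a stand-in for whatever model or API would actually be used.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

knowledge_base = [
    "Incidental pulmonary nodules 6-8 mm: follow-up CT at 6-12 months (Fleischner).",
    "Contrast-enhanced CT is preferred for suspected pulmonary embolism.",
    "MRI is the modality of choice for suspected multiple sclerosis.",
]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base snippets most similar to the query."""
    vec = TfidfVectorizer().fit(knowledge_base + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(knowledge_base))[0]
    return [knowledge_base[i] for i in sims.argsort()[::-1][:k]]

def generate(prompt: str) -> str:
    # Placeholder for an LLM call (e.g., a locally hosted model); echoes the prompt here.
    return f"[LLM draft based on]:\n{prompt}"

referral = "8 mm incidental lung nodule on CT, what follow-up should the report recommend?"
context = "\n".join(retrieve(referral))
print(generate(f"Context:\n{context}\n\nReferral question:\n{referral}"))
```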

Site-Level Fine-Tuning with Progressive Layer Freezing: Towards Robust Prediction of Bronchopulmonary Dysplasia from Day-1 Chest Radiographs in Extremely Preterm Infants

Sybelle Goedicke-Fritz, Michelle Bous, Annika Engel, Matthias Flotho, Pascal Hirsch, Hannah Wittig, Dino Milanovic, Dominik Mohr, Mathias Kaspar, Sogand Nemat, Dorothea Kerner, Arno Bücker, Andreas Keller, Sascha Meyer, Michael Zemlin, Philipp Flotho

arXiv preprint | Jul 16 2025
Bronchopulmonary dysplasia (BPD) is a chronic lung disease affecting 35% of extremely low birth weight infants. Defined by oxygen dependence at 36 weeks postmenstrual age, it causes lifelong respiratory complications. However, preventive interventions carry severe risks, including neurodevelopmental impairment, ventilator-induced lung injury, and systemic complications. Early prognosis and prediction of BPD outcome are therefore crucial to avoid unnecessary toxicity in low-risk infants. Admission radiographs of extremely preterm infants are routinely acquired within 24 h of life and could serve as a non-invasive prognostic tool. In this work, we developed and investigated a deep learning approach using chest X-rays from 163 extremely low-birth-weight infants ($\leq$32 weeks gestation, 401-999 g) obtained within 24 hours of birth. We fine-tuned a ResNet-50 pretrained specifically on adult chest radiographs, employing progressive layer freezing with discriminative learning rates to prevent overfitting, and evaluated CutMix augmentation and linear probing. For moderate/severe BPD outcome prediction, our best performing model, with progressive freezing, linear probing, and CutMix, achieved an AUROC of 0.78 $\pm$ 0.10, a balanced accuracy of 0.69 $\pm$ 0.10, and an F1-score of 0.67 $\pm$ 0.11. In-domain pre-training significantly outperformed ImageNet initialization (p = 0.031), confirming that domain-specific pretraining is important for BPD outcome prediction. Routine IRDS grades showed limited prognostic value (AUROC 0.57 $\pm$ 0.11), confirming the need for learned markers. Our approach demonstrates that domain-specific pretraining enables accurate BPD prediction from routine day-1 radiographs. Through progressive freezing and linear probing, the method remains computationally feasible for site-level implementation and future federated learning deployments.
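The training strategy named in the abstract, discriminative learning rates with progressive unfreezing of a ResNet-50 starting from a linear probe, can be sketched in PyTorch as below. The unfreezing schedule, learning rates, and random tensors are illustrative assumptions, and loading of the in-domain pretrained weights is omitted.

```python
# PyTorch sketch of discriminative learning rates + progressive layer unfreezing
# for a ResNet-50 with a binary outcome head. Illustrative only: schedule and
# hyperparameters are assumptions, and pretrained chest-X-ray weights are not loaded.
import torch
import torch.nn as nn
from torchvision.models import resnet50

model = resnet50(weights=None)                 # in practice: load chest-X-ray-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 1)  # binary BPD outcome head

# Discriminative learning rates: smaller LRs for earlier (more generic) layers.
groups = [
    {"params": model.fc.parameters(),     "lr": 1e-3},
    {"params": model.layer4.parameters(), "lr": 1e-4},
    {"params": model.layer3.parameters(), "lr": 1e-5},
]
optimizer = torch.optim.AdamW(groups, weight_decay=1e-4)

# Start with the backbone frozen (linear probing), then unfreeze progressively.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True

unfreeze_schedule = {3: model.layer4, 6: model.layer3}  # epoch -> block to unfreeze (illustrative)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(10):
    if epoch in unfreeze_schedule:
        for p in unfreeze_schedule[epoch].parameters():
            p.requires_grad = True
    # One dummy training step on random tensors stands in for the real data loader.
    x = torch.randn(4, 3, 224, 224)
    y = torch.randint(0, 2, (4, 1)).float()
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```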

CT-ScanGaze: A Dataset and Baselines for 3D Volumetric Scanpath Modeling

Trong-Thang Pham, Akash Awasthi, Saba Khan, Esteban Duran Marti, Tien-Phat Nguyen, Khoa Vo, Minh Tran, Ngoc Son Nguyen, Cuong Tran Van, Yuki Ikebe, Anh Totti Nguyen, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arXiv preprint | Jul 16 2025
Understanding radiologists' eye movement during Computed Tomography (CT) reading is crucial for developing effective interpretable computer-aided diagnosis systems. However, CT research in this area has been limited by the lack of publicly available eye-tracking datasets and the three-dimensional complexity of CT volumes. To address these challenges, we present the first publicly available eye gaze dataset on CT, called CT-ScanGaze. Then, we introduce CT-Searcher, a novel 3D scanpath predictor designed specifically to process CT volumes and generate radiologist-like 3D fixation sequences, overcoming the limitations of current scanpath predictors that only handle 2D inputs. Since deep learning models benefit from a pretraining step, we develop a pipeline that converts existing 2D gaze datasets into 3D gaze data to pretrain CT-Searcher. Through both qualitative and quantitative evaluations on CT-ScanGaze, we demonstrate the effectiveness of our approach and provide a comprehensive assessment framework for 3D scanpath prediction in medical imaging.

Multimodal neuroimaging unveils basal forebrain-limbic system circuit dysregulation in cognitive impairment with depression: a pathway to early diagnosis and intervention.

Xu X, Anayiti X, Chen P, Xie Z, Tao M, Xiang Y, Tan M, Liu Y, Yue L, Xiao S, Wang P

PubMed | Jul 16 2025
Alzheimer's disease (AD) frequently co-occurs with depressive symptoms, exacerbating both cognitive decline and clinical complexity, yet the neural substrates linking this co-occurrence remain poorly understood. We aimed to investigate the role of basal forebrain-limbic system circuit dysregulation in the interaction between cognitive impairment and depressive symptoms, identifying potential biomarkers for early diagnosis and intervention. This cross-sectional study included participants stratified into normal controls (NC), cognitive impairment without depression (CI-nD), and cognitive impairment with depression (CI-D). Multimodal MRI (structural, diffusion, functional, perfusion, and iron-sensitive imaging) and plasma biomarkers were analyzed, and machine learning models classified the subgroups using neuroimaging features. CI-D exhibited distinct basal forebrain-limbic circuit alterations versus CI-nD and NC: (1) elevated free-water fraction (FW) in basal forebrain subregions (Ch123/Ch4, p < 0.04), indicating early neuroinflammation; (2) increased iron deposition in the anterior cingulate cortex and entorhinal cortex (p < 0.05); (3) hyperperfusion and functional hyperactivity in Ch123 and the amygdala; and (4) plasma neurofilament light chain correlated with hippocampal inflammation in CI-nD (p = 0.03) but was linked to basal forebrain dysfunction in CI-D (p < 0.05). A multimodal support vector machine achieved 85% accuracy (AUC = 0.96) in distinguishing CI-D from CI-nD, with Ch123 and Ch4 as key discriminators. Pathway analysis in the CI-D group further revealed that FW-related neuroinflammation in the basal forebrain (Ch123/Ch4) contributed indirectly to cognitive impairment via structural atrophy. We identified a neuroinflammatory-cholinergic pathway in the basal forebrain as an early mechanism driving depression-associated cognitive decline. Multimodal imaging revealed distinct spatiotemporal patterns of circuit dysregulation, suggesting that neuroinflammation and iron deposition precede structural degeneration. These findings position the basal forebrain-limbic circuit as a therapeutic target and provide actionable biomarkers for early intervention in AD with depressive symptoms.
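A minimal sketch of a multimodal SVM classifier of the kind reported here, with concatenated feature blocks, an RBF kernel, and cross-validated AUC, is shown below; the feature blocks and labels are synthetic placeholders rather than the study's actual free-water, iron, perfusion, or plasma measures.

```python
# Minimal sketch of a multimodal SVM distinguishing two groups from concatenated
# imaging feature blocks. Features and labels are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 120
X_structural = rng.normal(size=(n, 10))  # e.g., regional volumes (synthetic)
X_diffusion = rng.normal(size=(n, 10))   # e.g., free-water fraction per subregion (synthetic)
X_perfusion = rng.normal(size=(n, 10))   # e.g., CBF values (synthetic)
X = np.hstack([X_structural, X_diffusion, X_perfusion])
y = rng.integers(0, 2, size=n)           # 1 = CI-D, 0 = CI-nD (synthetic)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print("5-fold AUC:", scores.mean())
```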

Automated microvascular invasion prediction of hepatocellular carcinoma via deep relation reasoning from dynamic contrast-enhanced ultrasound.

Wang Y, Xie W, Li C, Xu Q, Du Z, Zhong Z, Tang L

PubMed | Jul 16 2025
Hepatocellular carcinoma (HCC) is a major global health concern, with microvascular invasion (MVI) being a critical prognostic factor linked to early recurrence and poor survival. Preoperative MVI prediction remains challenging, but recent advancements in dynamic contrast-enhanced ultrasound (CEUS) imaging combined with artificial intelligence show promise in improving prediction accuracy. CEUS offers real-time visualization of tumor vascularity, providing unique insights into MVI characteristics. This study proposes a novel deep relation reasoning approach to address the challenges of modeling intricate temporal relationships and extracting complex spatial features from CEUS video frames. Our method integrates CEUS video sequences and introduces a visual graph reasoning framework that correlates intratumoral and peritumoral features across the imaging phases. The system employs dual-path feature extraction, MVI pattern topology construction, Graph Convolutional Network learning, and an MVI pattern discovery module to capture complex features while providing interpretable results. Experimental findings demonstrate that our approach surpasses existing state-of-the-art models in accuracy, sensitivity, specificity, and AUC for MVI prediction. These advancements promise to enhance HCC diagnosis and management, potentially revolutionizing patient care. The method's robust performance, even with limited data, underscores its potential for practical clinical application in improving the efficacy and efficiency of HCC diagnosis and treatment planning.
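To illustrate the Graph Convolutional Network component in isolation, the sketch below implements one standard GCN propagation step over a toy graph of region features; it shows generic message passing only, not the paper's MVI pattern topology construction or discovery modules.

```python
# One Kipf-&-Welling-style graph convolution step over a toy graph of region
# features. Generic GCN message passing only; not the paper's architecture.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Symmetrically normalize the adjacency with self-loops: D^-1/2 (A + I) D^-1/2.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        a_norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        return torch.relu(self.linear(a_norm @ x))

# Toy graph: 5 region nodes with 16-dim features (e.g., CEUS-phase descriptors).
x = torch.randn(5, 16)
adj = (torch.rand(5, 5) > 0.5).float()
adj = ((adj + adj.T) > 0).float()  # make the adjacency symmetric
out = GCNLayer(16, 8)(x, adj)
print(out.shape)  # torch.Size([5, 8])
```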

SML-Net: Semi-supervised multi-task learning network for carotid plaque segmentation and classification.

Gan H, Liu L, Wang F, Yang Z, Huang Z, Zhou R

PubMed | Jul 16 2025
Carotid ultrasound image segmentation and classification are crucial in assessing the severity of carotid plaques, which are a major cause of ischemic stroke. Although many methods are employed for carotid plaque segmentation and classification, treating these tasks separately neglects their interrelatedness. In addition, there is limited research exploring the key information of both plaque and background regions, and collecting and annotating extensive segmentation data is costly and time-intensive. To address these two issues, we propose an end-to-end semi-supervised multi-task learning network (SML-Net) that classifies plaques while performing segmentation. SML-Net identifies regions by extracting image features and fuses multi-scale features to improve semi-supervised segmentation. It then effectively exploits the plaque and background regions from the segmentation results and extracts features across multiple dimensions, thereby facilitating the classification task. Our experimental results indicate that SML-Net achieves a plaque classification accuracy of 86.59% and a Dice Similarity Coefficient (DSC) of 82.36%. Compared to the leading single-task network, SML-Net improves DSC by 1.2% and accuracy by 1.84%; compared to the best-performing multi-task network, it achieves a 1.05% increase in DSC and a 2.15% improvement in classification accuracy.
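The shared-encoder, two-head layout implied by joint segmentation and classification can be sketched in PyTorch as below; the architecture and losses are illustrative and do not reproduce SML-Net's actual design or its semi-supervised training scheme.

```python
# Sketch of a shared-encoder multi-task network with a segmentation head and a
# classification head. Illustrative layout only; not SML-Net's architecture.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(          # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)    # per-pixel plaque mask logits
        self.cls_head = nn.Sequential(         # plaque-type logits
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)

net = MultiTaskNet()
images = torch.randn(2, 1, 128, 128)           # toy ultrasound-like batch
masks = torch.randint(0, 2, (2, 1, 128, 128)).float()
labels = torch.randint(0, 3, (2,))

seg_logits, cls_logits = net(images)
# Joint loss: segmentation (BCE) + classification (cross-entropy).
loss = nn.BCEWithLogitsLoss()(seg_logits, masks) + nn.CrossEntropyLoss()(cls_logits, labels)
loss.backward()
print(float(loss))
```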