
An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

Jul 16 2025
Accurate early-stage diagnosis of gallbladder cancer (GBC) remains one of the major challenges in oncology, yet few studies have addressed comprehensive GBC classification based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 patients with gallbladder disease and volunteers, imaged on two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, GHL-Net, was also developed. An ensemble learning strategy fuses the multi-modality data to obtain the final classification result, and two interpretability methods are applied to help clinicians understand the model's decisions. Model performance was evaluated using accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed the comparison methods on both datasets. In the binary classification scenario in particular, it achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC: 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. Visualizations produced by the interpretability methods also demonstrated high clinical relevance of the intermediate decision-making process, and ablation studies provided an in-depth understanding of the methodology. The machine-learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a significant impact in other cancer-diagnosis scenarios.
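For readers who want to reproduce this style of evaluation, the sketch below shows how the reported binary-classification metrics (accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, MCC) can be computed with scikit-learn; the labels and fused ensemble probabilities are toy placeholders, not the study's data or code.

```python
# Minimal sketch: computing the reported binary-classification metrics with scikit-learn.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, average_precision_score,
                             matthews_corrcoef, confusion_matrix)

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])                    # ground-truth GBC labels (toy data)
y_score = np.array([0.1, 0.8, 0.7, 0.3, 0.9, 0.2, 0.6, 0.4])   # hypothetical fused ensemble probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy":    accuracy_score(y_true, y_pred),
    "sensitivity": recall_score(y_true, y_pred),        # recall of the positive class
    "specificity": tn / (tn + fp),
    "precision":   precision_score(y_true, y_pred),
    "f1":          f1_score(y_true, y_pred),
    "roc_auc":     roc_auc_score(y_true, y_score),
    "pr_auc":      average_precision_score(y_true, y_score),
    "mcc":         matthews_corrcoef(y_true, y_pred),
}
print(metrics)
```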

Utilizing machine learning to predict MRI signal outputs from iron oxide nanoparticles through the PSLG algorithm.

Hataminia F, Azinfar A

Jul 16 2025
In this research, we predict the output signal generated by iron oxide-based nanoparticles in magnetic resonance imaging (MRI) from the physical properties of the nanoparticles and the MRI machine. The input parameters are the size of the magnetic core of the nanoparticles, their magnetic saturation (Ms), the nanoparticle concentration (C), and the magnetic field (MF) strength of the MRI device; the relaxation rate R₂ (s⁻¹) is the output variable. To develop this model, we employed a machine learning approach based on a neural network known as SA-LOOCV-GRBF (SLG). We compared two random selection patterns: SLG disperse random selection (DSLG) and SLG parallel random selection (PSLG). Evaluated by mean square error (MSE), DSLG was more sensitive to the number of neurons in the hidden layers than PSLG, which maintained strong performance as the neuron count increased. Consequently, the new pattern, PSLG, was selected for predicting MRI behavior.
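The SA-LOOCV-GRBF implementation is not described in detail here, so the sketch below is only a loose analogue: a Gaussian-RBF regressor scored by leave-one-out cross-validation MSE, with synthetic stand-ins for the four input parameters (core size, Ms, C, MF) and the R₂ output.

```python
# Minimal sketch: RBF-kernel regression of R2 from nanoparticle/MRI parameters,
# evaluated with leave-one-out cross-validation (toy data, not the paper's algorithm).
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# columns: core size (nm), Ms (emu/g), concentration, field strength (T) -- all synthetic
X = rng.uniform([5, 20, 0.01, 1.5], [30, 90, 0.5, 7.0], size=(40, 4))
y = 50 + 2.0 * X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(0, 5, 40)   # synthetic R2 (s^-1)

model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.1)
neg_mse = cross_val_score(model, X, y, cv=LeaveOneOut(),
                          scoring="neg_mean_squared_error")
print("LOOCV MSE:", -neg_mse.mean())
```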

Imaging analysis using Artificial Intelligence to predict outcomes after endovascular aortic aneurysm repair: protocol for a retrospective cohort study.

Lareyre F, Raffort J, Kakkos SK, D'Oria M, Nasr B, Saratzis A, Antoniou GA, Hinchliffe RJ

Jul 16 2025
Endovascular aortic aneurysm repair (EVAR) requires long-term surveillance to detect and treat postoperative complications. However, prediction models to optimise follow-up strategies are still lacking. The primary objective of this study is to develop predictive models of post-operative outcomes following elective EVAR using Artificial Intelligence (AI)-driven analysis. The secondary objective is to investigate morphological aortic changes following EVAR. This international, multicentre, observational study will retrospectively include 500 patients who underwent elective EVAR. Primary outcomes are postoperative complications of EVAR, including death, re-intervention, endoleak, limb occlusion and stent-graft migration, occurring within 1 year and at mid-term follow-up (1 to 3 years). Secondary outcomes are aortic anatomical changes. Morphological changes following EVAR will be analysed and compared based on preoperative and postoperative CT angiography (CTA) images (within 1 to 12 months, and at the last follow-up) using the AI-based software PRAEVAorta 2 (Nurea). Deep learning algorithms will be applied to stratify the risk of postoperative outcomes into low- or high-risk categories. The training and testing datasets will comprise 70% and 30% of the cohort, respectively. The study protocol is designed to ensure that the sponsor and the investigators comply with the principles of the Declaration of Helsinki and the ICH E6 good clinical practice guideline. The study has been approved by the ethics committee of the University Hospital of Patras (Patras, Greece) under the number 492/05.12.2024. The results of the study will be presented at relevant national and international conferences and submitted for publication to peer-reviewed journals.
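As a simple illustration of the planned data partitioning, the sketch below performs a stratified 70%/30% train-test split on a hypothetical binary complication label; it is not part of the study protocol.

```python
# Minimal sketch: stratified 70%/30% split so both partitions keep the same event rate.
import numpy as np
from sklearn.model_selection import train_test_split

n_patients = 500
X = np.random.rand(n_patients, 16)            # placeholder imaging/clinical features
y = np.random.binomial(1, 0.2, n_patients)    # hypothetical 1-year complication label

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=42)
print(len(X_train), "training /", len(X_test), "test patients")
```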

From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow.

Fink A, Rau S, Kästingschäfer K, Weiß J, Bamberg F, Russe MF

Jul 16 2025
Large language models (LLMs) hold great promise for optimizing and supporting radiology workflows amidst rising workloads. This review examines potential applications in daily radiology practice, as well as remaining challenges and potential solutions. Potential applications and challenges are presented and illustrated with practical examples and concrete optimization suggestions. LLM-based assistance systems have potential applications in almost all language-based process steps of the radiological workflow. Significant progress has been made in areas such as report generation, particularly with retrieval-augmented generation (RAG) and multi-step reasoning approaches. However, challenges related to hallucinations, reproducibility, and data protection, as well as ethical concerns, need to be addressed before widespread implementation. LLMs have immense potential in radiology, particularly for supporting language-based process steps, with technological advances such as RAG and cloud-based approaches potentially accelerating clinical implementation.
· LLMs can optimize reporting and other language-based processes in radiology with technologies such as RAG and multi-step reasoning approaches.
· Challenges such as hallucinations, reproducibility, privacy, and ethical concerns must be addressed before widespread adoption.
· RAG and cloud-based approaches could help overcome these challenges and advance the clinical implementation of LLMs.
· Fink A, Rau S, Kästingschäfer K et al. From Referral to Reporting: The Potential of Large Language Models in the Radiological Workflow. Rofo 2025; DOI 10.1055/a-2641-3059.
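As a purely illustrative sketch of the retrieval step behind RAG, the snippet below retrieves the most similar guideline passage for a referral question using a toy bag-of-words similarity; the passages, the embedding, and the prompt template are all hypothetical, and a real system would use a proper embedding model and an LLM call.

```python
# Illustrative RAG retrieval sketch -- not a clinical system.
import numpy as np

guideline_snippets = [
    "Incidental pulmonary nodules 6-8 mm: follow-up CT in 6-12 months (Fleischner).",
    "Contrast-enhanced MRI is preferred for characterising small liver lesions.",
    "Head CT without contrast is first-line imaging for acute trauma.",
]

def embed(text, vocab):
    # toy bag-of-words vector; stands in for a real sentence-embedding model
    words = text.lower().split()
    return np.array([words.count(w) for w in vocab], dtype=float)

vocab = sorted({w for s in guideline_snippets for w in s.lower().split()})
doc_vecs = np.array([embed(s, vocab) for s in guideline_snippets])

query = "follow-up of an incidental 7 mm pulmonary nodule on CT"
q = embed(query, vocab)
sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
context = guideline_snippets[int(sims.argmax())]

prompt = f"Context:\n{context}\n\nReferral question: {query}\nDraft a recommendation:"
print(prompt)   # in a real RAG pipeline this prompt would be sent to the LLM
```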

Multimodal neuroimaging unveils basal forebrain-limbic system circuit dysregulation in cognitive impairment with depression: a pathway to early diagnosis and intervention.

Xu X, Anayiti X, Chen P, Xie Z, Tao M, Xiang Y, Tan M, Liu Y, Yue L, Xiao S, Wang P

Jul 16 2025
Alzheimer's disease (AD) frequently co-occurs with depressive symptoms, exacerbating both cognitive decline and clinical complexity, yet the neural substrates linking this co-occurrence remain poorly understood. We aimed to investigate the role of basal forebrain-limbic system circuit dysregulation in the interaction between cognitive impairment and depressive symptoms, identifying potential biomarkers for early diagnosis and intervention. This cross-sectional study included participants stratified into normal controls (NC), cognitive impairment without depression (CI-nD), and cognitive impairment with depression (CI-D). Multimodal MRI (structural, diffusion, functional, perfusion, iron-sensitive imaging) and plasma biomarkers were analyzed. Machine learning models classified subgroups using neuroimaging features. CI-D exhibited distinct basal forebrain-limbic circuit alterations versus CI-nD and NC: (1) elevated free-water fraction (FW) in basal forebrain subregions (Ch123/Ch4, p < 0.04), indicating early neuroinflammation; (2) increased iron deposition in the anterior cingulate cortex and entorhinal cortex (p < 0.05); (3) hyperperfusion and functional hyperactivity in Ch123 and the amygdala; (4) plasma neurofilament light chain correlated with hippocampal inflammation in CI-nD (p = 0.03) but was linked to basal forebrain dysfunction in CI-D (p < 0.05). A multimodal support vector machine achieved 85% accuracy (AUC = 0.96) in distinguishing CI-D from CI-nD, with Ch123 and Ch4 as key discriminators. Pathway analysis in the CI-D group further revealed that FW-related neuroinflammation in the basal forebrain (Ch123/Ch4) indirectly contributed to cognitive impairment via structural atrophy. We identified a neuroinflammatory-cholinergic pathway in the basal forebrain as an early mechanism driving depression-associated cognitive decline. Multimodal imaging revealed distinct spatiotemporal patterns of circuit dysregulation, suggesting that neuroinflammation and iron deposition precede structural degeneration. These findings position the basal forebrain-limbic system circuit as a therapeutic target and provide actionable biomarkers for early intervention in AD with depressive symptoms.
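Below is a minimal sketch of the kind of multimodal SVM classification and AUC evaluation described above (CI-D vs CI-nD), using synthetic feature matrices in place of the imaging-derived features; it is not the authors' pipeline.

```python
# Minimal sketch: RBF-SVM subgroup classification with cross-validated probabilities (toy data).
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(1)
n = 120
X = np.vstack([rng.normal(0.0, 1, (n // 2, 10)),   # CI-nD: stand-ins for FW, iron, perfusion features
               rng.normal(0.8, 1, (n // 2, 10))])  # CI-D: shifted in feature space for illustration
y = np.array([0] * (n // 2) + [1] * (n // 2))

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True, random_state=0))
proba = cross_val_predict(clf, X, y,
                          cv=StratifiedKFold(5, shuffle=True, random_state=0),
                          method="predict_proba")[:, 1]
y_pred = (proba >= 0.5).astype(int)
print("accuracy:", accuracy_score(y, y_pred), "AUC:", roc_auc_score(y, proba))
```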

Automated microvascular invasion prediction of hepatocellular carcinoma via deep relation reasoning from dynamic contrast-enhanced ultrasound.

Wang Y, Xie W, Li C, Xu Q, Du Z, Zhong Z, Tang L

Jul 16 2025
Hepatocellular carcinoma (HCC) is a major global health concern, with microvascular invasion (MVI) being a critical prognostic factor linked to early recurrence and poor survival. Preoperative MVI prediction remains challenging, but recent advancements in dynamic contrast-enhanced ultrasound (CEUS) imaging combined with artificial intelligence show promise in improving prediction accuracy. CEUS offers real-time visualization of tumor vascularity, providing unique insights into MVI characteristics. This study proposes a novel deep relation reasoning approach to address the challenges of modeling intricate temporal relationships and extracting complex spatial features from CEUS video frames. Our method integrates CEUS video sequences and introduces a visual graph reasoning framework that correlates intratumoral and peritumoral features across various imaging phases. The system employs dual-path feature extraction, MVI pattern topology construction, Graph Convolutional Network learning, and an MVI pattern discovery module to capture complex features while providing interpretable results. Experimental findings demonstrate that our approach surpasses existing state-of-the-art models in accuracy, sensitivity, specificity, and AUC for MVI prediction. These advancements promise to enhance HCC diagnosis and management, potentially revolutionizing patient care. The method's robust performance, even with limited data, underscores its potential for practical clinical application in improving the efficacy and efficiency of HCC diagnosis and treatment planning.
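To make the graph-reasoning component concrete, the sketch below shows one generic graph-convolution propagation step over a toy region graph; the adjacency, node features, and weights are placeholders and do not reflect the paper's actual MVI pattern topology.

```python
# Minimal sketch: one Kipf-Welling-style graph-convolution step over a toy region graph.
import numpy as np

A = np.array([[0, 1, 1, 0],     # toy adjacency: 4 nodes, e.g. intratumoral/peritumoral
              [1, 0, 1, 0],     # regions linked across CEUS phases
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 8))   # node features (placeholder)
W = np.random.default_rng(1).normal(size=(8, 4))   # "learnable" weights (random here)

A_hat = A + np.eye(4)                              # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H_next = np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0)  # ReLU(normalised propagation)
print(H_next.shape)   # (4, 4): updated node embeddings
```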

SML-Net: Semi-supervised multi-task learning network for carotid plaque segmentation and classification.

Gan H, Liu L, Wang F, Yang Z, Huang Z, Zhou R

Jul 16 2025
Carotid ultrasound image segmentation and classification are crucial in assessing the severity of carotid plaques, a major cause of ischemic stroke. Although many methods exist for carotid plaque segmentation and classification, treating these tasks separately neglects their interrelatedness. In addition, little research has explored the key information of both plaque and background regions, and collecting and annotating extensive segmentation data is costly and time-intensive. To address these two issues, we propose an end-to-end semi-supervised multi-task learning network (SML-Net), which classifies plaques while performing segmentation. SML-Net identifies regions by extracting image features and fuses multi-scale features to improve semi-supervised segmentation. It effectively utilizes the plaque and background regions from the segmentation results and extracts features across multiple dimensions, thereby facilitating the classification task. Our experimental results indicate that SML-Net achieves a plaque classification accuracy of 86.59% and a Dice Similarity Coefficient (DSC) of 82.36%. Compared to the leading single-task network, SML-Net improves DSC by 1.2% and accuracy by 1.84%; compared to the best-performing multi-task network, it achieves a 1.05% increase in DSC and a 2.15% improvement in classification accuracy.
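The sketch below illustrates the Dice Similarity Coefficient and a simple weighted multi-task objective of the kind a segmentation-plus-classification network optimises; the toy masks and the weighting factor are assumptions, not SML-Net's exact loss.

```python
# Minimal sketch: Dice Similarity Coefficient and a toy weighted multi-task loss.
import numpy as np

def dice(pred_mask, true_mask, eps=1e-6):
    inter = np.sum(pred_mask * true_mask)
    return (2.0 * inter + eps) / (pred_mask.sum() + true_mask.sum() + eps)

pred = (np.random.default_rng(0).random((64, 64)) > 0.5).astype(float)   # toy predicted plaque mask
true = (np.random.default_rng(1).random((64, 64)) > 0.5).astype(float)   # toy ground-truth mask

seg_loss = 1.0 - dice(pred, true)        # soft Dice loss term for segmentation
cls_loss = -np.log(0.85)                 # toy cross-entropy for one plaque-class prediction
total_loss = seg_loss + 0.5 * cls_loss   # hypothetical task-weighting factor
print(f"DSC={dice(pred, true):.3f}  total_loss={total_loss:.3f}")
```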

Automated CAD-RADS scoring from multiplanar CCTA images using radiomics-driven machine learning.

Corti A, Ronchetti F, Lo Iacono F, Chiesa M, Colombo G, Annoni A, Baggiano A, Carerj ML, Del Torto A, Fazzari F, Formenti A, Junod D, Mancini ME, Maragna R, Marchetti F, Sbordone FP, Tassetti L, Volpe A, Mushtaq S, Corino VDA, Pontone G

Jul 16 2025
Coronary Artery Disease-Reporting and Data System (CAD-RADS) scoring, a standardized report of stenosis severity from coronary computed tomography angiography (CCTA), is performed manually by expert radiologists and is therefore time-consuming and prone to interobserver variability. While deep learning methods automating CAD-RADS scoring have been proposed, radiomics-based machine-learning approaches are lacking, despite their improved interpretability. This study introduces a novel radiomics-based machine-learning approach for automating CAD-RADS scoring from CCTA images with multiplanar reconstruction. This retrospective monocentric study included 251 patients (70% male; mean age 60.5 ± 12.7 years) who underwent CCTA in 2016-2018 for clinical evaluation of CAD. Images were automatically segmented, radiomic features were extracted, and clinical characteristics were collected. The image dataset was partitioned into training and test sets (90%-10%). The training phase encompassed feature scaling and selection, data balancing and model training within a 5-fold cross-validation. A cascade pipeline was implemented for both 6-class CAD-RADS scoring and 4-class therapy-oriented classification (0-1, 2, 3-4, 5), through consecutive sub-tasks. For each classification task, the cascade pipeline was applied to develop clinical, radiomic, and combined models. The radiomic, combined and clinical models yielded AUC = 0.88 [0.86-0.88], AUC = 0.90 [0.88-0.90], and AUC = 0.66 [0.66-0.67] for CAD-RADS scoring, and AUC = 0.93 [0.91-0.93], AUC = 0.97 [0.96-0.97], and AUC = 0.79 [0.78-0.79] for the therapy-oriented classification. The radiomic and combined models significantly outperformed the clinical model (DeLong p-value < 0.05) in classes 1 and 2 (CAD-RADS cascade) and class 2 (therapy-oriented cascade). This study presents the first radiomic model for CAD-RADS classification, offering higher explainability and providing a promising support system for coronary artery stenosis assessment.
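The sketch below outlines a simplified radiomics training step with the ingredients named above (feature scaling, feature selection, class balancing via class weights, 5-fold cross-validation); it is a generic stand-in for one sub-task of the cascade, not the authors' model.

```python
# Minimal sketch: radiomics-style pipeline for one binary sub-task with 5-fold CV (toy data).
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 100))        # placeholder radiomic features
y = rng.integers(0, 2, 200)            # toy binary label for one cascade sub-task

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=20)),
    ("clf", LogisticRegression(class_weight="balanced", max_iter=1000)),  # class weights as balancing stand-in
])
auc = cross_val_score(pipe, X, y,
                      cv=StratifiedKFold(5, shuffle=True, random_state=0),
                      scoring="roc_auc")
print("5-fold ROC-AUC: %.2f +/- %.2f" % (auc.mean(), auc.std()))
```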

Validation of artificial intelligence software for automatic calcium scoring in cardiac and chest computed tomography.

Hamelink II, Nie ZZ, Severijn TEJT, van Tuinen MM, van Ooijen PMAP, Kwee TCT, Dorrius MDM, van der Harst PP, Vliegenthart RR

Jul 16 2025
Coronary artery calcium scoring (CACS), i.e. quantification of the Agatston score (AS) or volume score (VS), can be time-consuming. The aim of this study was to compare automated, artificial intelligence (AI)-based CACS with manual scoring in cardiac CT and in chest CT for lung cancer screening. We selected 684 participants (59 ± 4.8 years; 48.8% men) who underwent cardiac and non-ECG-triggered chest CT, including 484 participants with AS > 0 on cardiac CT. AI-based results were compared to manual AS and VS by assessing sensitivity and accuracy, the intraclass correlation coefficient (ICC), Bland-Altman analysis, and Cohen's kappa for classification into AS strata (0; 1-99; 100-299; ≥300). AI showed a high CAC detection rate: 98.1% in cardiac CT (accuracy 97.1%) and 92.4% in chest CT (accuracy 92.1%). AI showed excellent agreement with manual AS (ICC: 0.997 and 0.992) and manual VS (ICC: 0.997 and 0.991) in cardiac CT and chest CT, respectively. In Bland-Altman analysis, the mean difference was 2.3 (limits of agreement (LoA): -42.7, 47.4) for AS on cardiac CT; 1.9 (LoA: -36.4, 40.2) for VS on cardiac CT; -0.3 (LoA: -74.8, 74.2) for AS on chest CT; and -0.6 (LoA: -65.7, 64.5) for VS on chest CT. Cohen's kappa was 0.952 (95% CI: 0.934-0.970) for cardiac CT and 0.901 (95% CI: 0.875-0.926) for chest CT, with concordance in 95.9% and 91.4% of cases, respectively. AI-based CACS shows a high detection rate and strong correlation with manual CACS, with excellent risk-classification agreement. AI may reduce evaluation time and enable opportunistic screening for CAC on low-dose chest CT.
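The agreement statistics reported here can be reproduced on paired score vectors as in the sketch below (Bland-Altman mean difference and limits of agreement, plus Cohen's kappa across Agatston strata); the paired scores are synthetic examples, not study data.

```python
# Minimal sketch: Bland-Altman limits of agreement and Cohen's kappa over AS strata (toy data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
manual_as = rng.gamma(shape=1.0, scale=150.0, size=100)              # toy manual Agatston scores
ai_as = np.clip(manual_as + rng.normal(0, 20, size=100), 0, None)    # toy AI scores with small noise

diff = ai_as - manual_as
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))
print(f"mean difference {bias:.1f}, limits of agreement {loa[0]:.1f} to {loa[1]:.1f}")

strata = [0, 1, 100, 300]                                            # AS categories: 0, 1-99, 100-299, >=300
kappa = cohen_kappa_score(np.digitize(manual_as, strata), np.digitize(ai_as, strata))
print(f"Cohen's kappa across strata: {kappa:.2f}")
```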