Page 113 of 2412410 results

Opportunistic computed tomography (CT) assessment of osteoporosis in patients undergoing transcatheter aortic valve replacement (TAVR).

Paukovitsch M, Fechner T, Felbel D, Moerike J, Rottbauer W, Klömpken S, Brunner H, Kloth C, Beer M, Sekuboyina A, Buckert D, Kirschke JS, Sollmann N

pubmed · Jul 17 2025
CT-based opportunistic screening using artificial intelligence finds a high prevalence (43%) of osteoporosis in CT scans obtained for planning of transcatheter aortic valve replacement. Thus, opportunistic screening may be a cost-effective way to assess osteoporosis in high-risk populations. Osteoporosis is an underdiagnosed condition associated with fractures and frailty, but it may be detected in routine computed tomography (CT) scans. Volumetric bone mineral density (vBMD) was measured with an artificial intelligence (AI)-based algorithm in routine clinical thoraco-abdominal CT scans of 207 patients acquired for planning of transcatheter aortic valve replacement (TAVR). 43% of patients had osteoporosis (vBMD < 80 mg/cm³ at L1-L3); these patients were older (83.0 [interquartile range (IQR): 78.0-85.5] vs. 79.0 [IQR: 71.8-84.0] years, p < 0.001), more often female (55.1 vs. 28.8%, p < 0.001), and had a higher Society of Thoracic Surgeons score for mortality (3.0 [IQR: 1.8-4.6] vs. 2.1 [IQR: 1.4-3.2]%, p < 0.001). In addition to lumbar vBMD (58.2 ± 14.7 vs. 106 ± 21.4 mg/cm³, p < 0.001), thoracic vBMD (79.5 ± 17.9 vs. 127.4 ± 26.0 mg/cm³, p < 0.001) was also significantly reduced in these patients and showed high diagnostic accuracy for osteoporosis assessment (area under the curve: 0.96, p < 0.001). Osteoporotic patients were significantly more often at risk of falls (40.4 vs. 22.9%, p = 0.007) and more frequently required help with activities of daily living (ADL) (48.3 vs. 33.1%, p = 0.026), while direct-to-home discharges were fewer (88.8 vs. 96.6%, p = 0.026). In-hospital bleeding complications (3.4 vs. 5.1%), stroke (1.1 vs. 2.5%), and death (1.1 vs. 0.8%) were equally low, while in-hospital device success was equally high (94.4 vs. 94.9%, p > 0.05 for all comparisons). However, one-year probability of survival was significantly lower in osteoporotic patients (84.0 vs. 98.2%, log-rank p < 0.01).
Applying an AI-based algorithm to TAVR planning CT scans can reveal a high rate of osteoporosis (43% of patients). Osteoporosis may represent a marker of frailty and worsened outcomes in TAVR patients.
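The screening rule described above reduces to a simple threshold check. A minimal sketch, assuming per-vertebra vBMD values (mg/cm³) are averaged over L1-L3; the function name and input format are illustrative assumptions, and only the 80 mg/cm³ cutoff comes from the study:

```python
def is_osteoporotic(vbmd_l1_l3, threshold=80.0):
    """Flag osteoporosis when the mean L1-L3 volumetric BMD (mg/cm^3)
    falls below the diagnostic threshold used in the study."""
    mean_vbmd = sum(vbmd_l1_l3) / len(vbmd_l1_l3)
    return mean_vbmd < threshold

# Toy per-vertebra values near the reported group means (58.2 vs. 106 mg/cm^3)
print(is_osteoporotic([60.0, 58.0, 56.6]))     # → True
print(is_osteoporotic([110.0, 104.0, 104.0]))  # → False
```

In a real pipeline the per-vertebra values would come from the AI-based segmentation of the planning CT, not be entered by hand.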

Hybrid Ensemble Approaches: Optimal Deep Feature Fusion and Hyperparameter-Tuned Classifier Ensembling for Enhanced Brain Tumor Classification

Zahid Ullah, Dragan Pamucar, Jihie Kim

arXiv preprint · Jul 16 2025
Magnetic Resonance Imaging (MRI) is widely recognized as the most reliable tool for detecting tumors due to its capability to produce detailed images that reveal their presence. However, the accuracy of diagnosis can be compromised when human specialists evaluate these images. Factors such as fatigue, limited expertise, and insufficient image detail can lead to errors. For example, small tumors might go unnoticed, or overlap with healthy brain regions could result in misidentification. To address these challenges and enhance diagnostic precision, this study proposes a novel double ensembling framework, consisting of ensembled pre-trained deep learning (DL) models for feature extraction and ensembled hyperparameter-tuned machine learning (ML) models to efficiently classify brain tumors. Specifically, our method includes extensive preprocessing and augmentation, transfer learning via various pre-trained deep convolutional neural networks and vision transformer networks to extract deep features from brain MRI, and fine-tuning of the hyperparameters of ML classifiers. Our experiments utilized three different publicly available Kaggle MRI brain tumor datasets to evaluate the pre-trained DL feature extractor models, ML classifiers, and the effectiveness of an ensemble of deep features along with an ensemble of ML classifiers for brain tumor classification. Our results indicate that the proposed feature fusion and classifier fusion improve upon the state of the art, with hyperparameter fine-tuning providing a significant enhancement over the ensemble method. Additionally, we present an ablation study to illustrate how each component contributes to accurate brain tumor classification.
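The double-ensembling idea (deep features from several extractors fused by concatenation, then a hard-voting ensemble of classifiers) can be sketched in plain Python. The toy classifiers and class labels below are placeholders, not the paper's tuned models:

```python
def fuse_features(feature_sets):
    """Concatenate deep-feature vectors from several pretrained extractors."""
    fused = []
    for features in feature_sets:
        fused.extend(features)
    return fused

def majority_vote(classifiers, x):
    """Ensemble classifiers by hard voting: the most frequent label wins."""
    votes = [clf(x) for clf in classifiers]
    return max(set(votes), key=votes.count)

# Toy stand-ins for hyperparameter-tuned ML classifiers on fused features
clf_a = lambda x: "glioma"
clf_b = lambda x: "meningioma"
clf_c = lambda x: "glioma"

fused = fuse_features([[0.1, 0.9], [0.4, 0.2, 0.7]])
print(fused)                                        # → [0.1, 0.9, 0.4, 0.2, 0.7]
print(majority_vote([clf_a, clf_b, clf_c], fused))  # → glioma
```

Soft voting (averaging predicted probabilities) is a common alternative; the abstract does not specify which variant is used.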

Automated CAD-RADS scoring from multiplanar CCTA images using radiomics-driven machine learning.

Corti A, Ronchetti F, Lo Iacono F, Chiesa M, Colombo G, Annoni A, Baggiano A, Carerj ML, Del Torto A, Fazzari F, Formenti A, Junod D, Mancini ME, Maragna R, Marchetti F, Sbordone FP, Tassetti L, Volpe A, Mushtaq S, Corino VDA, Pontone G

pubmed · Jul 16 2025
Coronary Artery Disease-Reporting and Data System (CAD-RADS) scoring, a standardized report of stenosis severity from coronary computed tomography angiography (CCTA), is performed manually by expert radiologists and is therefore time-consuming and prone to interobserver variability. While deep learning methods automating CAD-RADS scoring have been proposed, radiomics-based machine-learning approaches are lacking, despite their improved interpretability. This study aims to introduce a novel radiomics-based machine-learning approach for automating CAD-RADS scoring from CCTA images with multiplanar reconstruction. This retrospective monocentric study included 251 patients (male 70%; mean age 60.5 ± 12.7 years) who underwent CCTA in 2016-2018 for clinical evaluation of CAD. Images were automatically segmented, and radiomic features were extracted. Clinical characteristics were collected. The image dataset was partitioned into training and test sets (90%-10%). The training phase encompassed feature scaling and selection, data balancing, and model training within a 5-fold cross-validation. A cascade pipeline was implemented for both 6-class CAD-RADS scoring and 4-class therapy-oriented classification (0-1, 2, 3-4, 5), through consecutive sub-tasks. For each classification task the cascade pipeline was applied to develop clinical, radiomic, and combined models. The radiomic, combined, and clinical models yielded AUC = 0.88 [0.86-0.88], AUC = 0.90 [0.88-0.90], and AUC = 0.66 [0.66-0.67] for CAD-RADS scoring, and AUC = 0.93 [0.91-0.93], AUC = 0.97 [0.96-0.97], and AUC = 0.79 [0.78-0.79] for the therapy-oriented classification. The radiomic and combined models significantly outperformed (DeLong p-value < 0.05) the clinical one in classes 1 and 2 (CAD-RADS cascade) and class 2 (therapy-oriented cascade). This study presents the first radiomics-based CAD-RADS classification model, offering higher explainability and providing a promising support system in coronary artery stenosis assessment.
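A cascade of consecutive binary sub-tasks, as used for the 4-class therapy-oriented grouping above, can be sketched as follows. The per-stage deciders and the single stenosis-severity score are illustrative assumptions; only the class grouping (0-1, 2, 3-4, 5) comes from the abstract:

```python
def cascade_classify(x, deciders):
    """Route a sample through consecutive binary sub-tasks:
    each stage either assigns its class or defers to the next stage."""
    classes = ["0-1", "2", "3-4", "5"]
    for label, decide in zip(classes, deciders):
        if decide(x):
            return label
    return classes[-1]  # last class assigned by exclusion

# Toy binary deciders keyed on a hypothetical stenosis-severity score
deciders = [
    lambda x: x < 0.25,   # CAD-RADS 0-1 vs. rest
    lambda x: x < 0.50,   # CAD-RADS 2 vs. rest
    lambda x: x < 0.75,   # CAD-RADS 3-4 vs. 5
]
print(cascade_classify(0.1, deciders))  # → 0-1
print(cascade_classify(0.6, deciders))  # → 3-4
print(cascade_classify(0.9, deciders))  # → 5
```

In the study each stage would be a trained radiomics classifier rather than a fixed threshold; the cascade structure itself is what this sketch illustrates.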

Automated microvascular invasion prediction of hepatocellular carcinoma via deep relation reasoning from dynamic contrast-enhanced ultrasound.

Wang Y, Xie W, Li C, Xu Q, Du Z, Zhong Z, Tang L

pubmed · Jul 16 2025
Hepatocellular carcinoma (HCC) is a major global health concern, with microvascular invasion (MVI) being a critical prognostic factor linked to early recurrence and poor survival. Preoperative MVI prediction remains challenging, but recent advancements in dynamic contrast-enhanced ultrasound (CEUS) imaging combined with artificial intelligence show promise in improving prediction accuracy. CEUS offers real-time visualization of tumor vascularity, providing unique insights into MVI characteristics. This study proposes a novel deep relation reasoning approach to address the challenges of modeling intricate temporal relationships and extracting complex spatial features from CEUS video frames. Our method integrates CEUS video sequences and introduces a visual graph reasoning framework that correlates intratumoral and peritumoral features across various imaging phases. The system employs dual-path feature extraction, MVI pattern topology construction, Graph Convolutional Network learning, and an MVI pattern discovery module to capture complex features while providing interpretable results. Experimental findings demonstrate that our approach surpasses existing state-of-the-art models in accuracy, sensitivity, specificity, and AUC for MVI prediction. These advancements promise to enhance HCC diagnosis and management, potentially revolutionizing patient care. The method's robust performance, even with limited data, underscores its potential for practical clinical application in improving the efficacy and efficiency of HCC patient diagnosis and treatment planning.
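The graph-reasoning component relies on message passing over a region graph. A heavily simplified, scalar-feature sketch of one mean-aggregation step (a stand-in for a Graph Convolutional Network layer; the three-node graph and feature values are hypothetical):

```python
def gcn_step(features, adjacency):
    """One simplified graph-convolution step: each node's new feature is
    the mean of its own and its neighbours' features (scalar case)."""
    updated = []
    for i, f in enumerate(features):
        neighbourhood = [f] + [features[j] for j in adjacency[i]]
        updated.append(sum(neighbourhood) / len(neighbourhood))
    return updated

# Toy 3-node graph: e.g. intratumoral, peritumoral, and margin regions
features = [1.0, 3.0, 5.0]
adjacency = {0: [1], 1: [0, 2], 2: [1]}
print(gcn_step(features, adjacency))  # → [2.0, 3.0, 4.0]
```

A real GCN layer additionally applies a learned weight matrix and nonlinearity to vector-valued node features; only the neighbourhood-averaging structure is shown here.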

Multimodal neuroimaging unveils basal forebrain-limbic system circuit dysregulation in cognitive impairment with depression: a pathway to early diagnosis and intervention.

Xu X, Anayiti X, Chen P, Xie Z, Tao M, Xiang Y, Tan M, Liu Y, Yue L, Xiao S, Wang P

pubmed · Jul 16 2025
Alzheimer's disease (AD) frequently co-occurs with depressive symptoms, exacerbating both cognitive decline and clinical complexity, yet the neural substrates linking this co-occurrence remain poorly understood. We aimed to investigate the role of basal forebrain-limbic system circuit dysregulation in the interaction between cognitive impairment and depressive symptoms, identifying potential biomarkers for early diagnosis and intervention. This cross-sectional study included participants stratified into normal controls (NC), cognitive impairment without depression (CI-nD), and cognitive impairment with depression (CI-D). Multimodal MRI (structural, diffusion, functional, perfusion, iron-sensitive imaging) and plasma biomarkers were analyzed. Machine learning models classified subgroups using neuroimaging features. CI-D exhibited distinct basal forebrain-limbic circuit alterations versus CI-nD and NC: (1) Elevated free-water fraction (FW) in basal forebrain subregions (Ch123/Ch4, p < 0.04), indicating early neuroinflammation; (2) Increased iron deposition in the anterior cingulate cortex and entorhinal cortex (p < 0.05); (3) Hyperperfusion and functional hyperactivity in Ch123 and the amygdala; (4) Plasma neurofilament light chain correlated with hippocampal inflammation in CI-nD (p = 0.03) but was linked to basal forebrain dysfunction in CI-D (p < 0.05). A multimodal support vector machine achieved 85% accuracy (AUC = 0.96) in distinguishing CI-D from CI-nD, with Ch123 and Ch4 as key discriminators. Pathway analysis in the CI-D group further revealed that FW-related neuroinflammation in the basal forebrain (Ch123/Ch4) indirectly contributed to cognitive impairment via structural atrophy. We identified a neuroinflammatory-cholinergic pathway in the basal forebrain as an early mechanism driving depression-associated cognitive decline.
Multimodal imaging revealed distinct spatiotemporal patterns of circuit dysregulation, suggesting neuroinflammation and iron deposition precede structural degeneration. These findings position the basal forebrain-limbic system circuit as a therapeutic target and provide actionable biomarkers for early intervention in AD with depressive symptoms.
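The AUC reported for the classifier above can be computed directly as the probability that a positive case scores above a negative one (the Mann-Whitney formulation), which is useful to keep in mind when interpreting such numbers. A stdlib sketch with hypothetical scores:

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a positive scores above a negative,
    with ties counting one half (Mann-Whitney U formulation)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical classifier scores for CI-D (positive) vs. CI-nD (negative)
print(auc([0.9, 0.8, 0.7], [0.6, 0.75, 0.2]))  # ≈ 0.889
```

For large cohorts a sorting-based O(n log n) implementation is preferred over this O(n·m) double loop.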

Multi-DECT image-based radiomics with interpretable machine learning for preoperative prediction of tumor budding grade and prognosis in colorectal cancer: a dual-center study.

Lin G, Chen W, Chen Y, Cao J, Mao W, Xia S, Chen M, Xu M, Lu C, Ji J

pubmed · Jul 16 2025
This study evaluates the predictive ability of multiparametric dual-energy computed tomography (multi-DECT) radiomics for tumor budding (TB) grade and prognosis in patients with colorectal cancer (CRC). This study comprised 510 CRC patients at two institutions. The radiomics features of multi-DECT images (including polyenergetic, virtual monoenergetic, iodine concentration [IC], and effective atomic number images) were screened to build radiomics models utilizing nine machine learning (ML) algorithms. An ML-based fusion model comprising clinical-radiological variables and radiomics features was developed. Model performance was assessed with the area under the receiver operating characteristic curve (AUC), while model interpretability was assessed with Shapley additive explanations (SHAP). The prognostic significance of the fusion model was determined via survival analysis. The CT-reported lymph node status and normalized IC were used to develop a clinical-radiological model. Among the nine examined ML algorithms, the extreme gradient boosting (XGB) algorithm performed best. The XGB-based fusion model containing multi-DECT radiomics features outperformed the clinical-radiological model in predicting TB grade, demonstrating superior AUCs of 0.969 in the training cohort, 0.934 in the internal validation cohort, and 0.897 in the external validation cohort. The SHAP analysis identified variables influencing model predictions. Patients with a model-predicted high TB grade had worse recurrence-free survival (RFS) in both the training (P < 0.001) and internal validation (P = 0.016) cohorts. The XGB-based fusion model using multi-DECT radiomics could serve as a non-invasive tool to predict TB grade and RFS in patients with CRC preoperatively.
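Comparing nine algorithms and keeping the one with the best validation AUC, as done above, is a simple model-selection loop. A sketch where the candidate "models" are stand-in validation AUCs (the names and numbers other than XGB's 0.934 are hypothetical):

```python
def select_best_model(candidates, evaluate):
    """Pick the candidate with the highest validation score,
    mirroring a best-of-N algorithm comparison."""
    scored = {name: evaluate(model) for name, model in candidates.items()}
    best = max(scored, key=scored.get)
    return best, scored[best]

# Stand-in: each "model" is just its hypothetical validation AUC
candidates = {"xgb": 0.934, "random_forest": 0.90, "svm": 0.88}
best, score = select_best_model(candidates, evaluate=lambda m: m)
print(best, score)  # → xgb 0.934
```

In practice `evaluate` would run cross-validation on held-out folds; selecting on the same data used for the final report would bias the AUC upward.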

CT-ScanGaze: A Dataset and Baselines for 3D Volumetric Scanpath Modeling

Trong-Thang Pham, Akash Awasthi, Saba Khan, Esteban Duran Marti, Tien-Phat Nguyen, Khoa Vo, Minh Tran, Ngoc Son Nguyen, Cuong Tran Van, Yuki Ikebe, Anh Totti Nguyen, Anh Nguyen, Zhigang Deng, Carol C. Wu, Hien Van Nguyen, Ngan Le

arXiv preprint · Jul 16 2025
Understanding radiologists' eye movement during Computed Tomography (CT) reading is crucial for developing effective interpretable computer-aided diagnosis systems. However, CT research in this area has been limited by the lack of publicly available eye-tracking datasets and the three-dimensional complexity of CT volumes. To address these challenges, we present the first publicly available eye gaze dataset on CT, called CT-ScanGaze. Then, we introduce CT-Searcher, a novel 3D scanpath predictor designed specifically to process CT volumes and generate radiologist-like 3D fixation sequences, overcoming the limitations of current scanpath predictors that only handle 2D inputs. Since deep learning models benefit from a pretraining step, we develop a pipeline that converts existing 2D gaze datasets into 3D gaze data to pretrain CT-Searcher. Through both qualitative and quantitative evaluations on CT-ScanGaze, we demonstrate the effectiveness of our approach and provide a comprehensive assessment framework for 3D scanpath prediction in medical imaging.
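The 2D-to-3D pretraining pipeline mentioned above must assign a slice coordinate to each 2D fixation. A toy sketch of one possible lifting rule, spreading a fixation sequence evenly across slices; this mapping is an assumption for illustration, not the paper's actual conversion:

```python
def lift_fixations_to_3d(fixations_2d, n_slices):
    """Toy 2D-to-3D lifting: assign each (x, y, duration) fixation a
    slice index z in reading order, spread evenly across the volume."""
    volume_fixations = []
    for i, (x, y, duration) in enumerate(fixations_2d):
        z = round(i * (n_slices - 1) / max(len(fixations_2d) - 1, 1))
        volume_fixations.append((x, y, z, duration))
    return volume_fixations

# Hypothetical normalized gaze points with durations in milliseconds
fixations = [(0.2, 0.3, 120), (0.5, 0.5, 200), (0.7, 0.4, 90)]
print(lift_fixations_to_3d(fixations, n_slices=5))
# → [(0.2, 0.3, 0, 120), (0.5, 0.5, 2, 200), (0.7, 0.4, 4, 90)]
```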

Site-Level Fine-Tuning with Progressive Layer Freezing: Towards Robust Prediction of Bronchopulmonary Dysplasia from Day-1 Chest Radiographs in Extremely Preterm Infants

Sybelle Goedicke-Fritz, Michelle Bous, Annika Engel, Matthias Flotho, Pascal Hirsch, Hannah Wittig, Dino Milanovic, Dominik Mohr, Mathias Kaspar, Sogand Nemat, Dorothea Kerner, Arno Bücker, Andreas Keller, Sascha Meyer, Michael Zemlin, Philipp Flotho

arXiv preprint · Jul 16 2025
Bronchopulmonary dysplasia (BPD) is a chronic lung disease affecting 35% of extremely low birth weight infants. Defined by oxygen dependence at 36 weeks postmenstrual age, it causes lifelong respiratory complications. However, preventive interventions carry severe risks, including neurodevelopmental impairment, ventilator-induced lung injury, and systemic complications. Therefore, early BPD prognosis and prediction of BPD outcome are crucial to avoid unnecessary toxicity in low-risk infants. Admission radiographs of extremely preterm infants are routinely acquired within 24h of life and could serve as a non-invasive prognostic tool. In this work, we developed and investigated a deep learning approach using chest X-rays from 163 extremely low-birth-weight infants ($\leq$32 weeks gestation, 401-999g) obtained within 24 hours of birth. We fine-tuned a ResNet-50 pretrained specifically on adult chest radiographs, employing progressive layer freezing with discriminative learning rates to prevent overfitting, and evaluated CutMix augmentation and linear probing. For moderate/severe BPD outcome prediction, our best performing model with progressive freezing, linear probing and CutMix achieved an AUROC of 0.78 $\pm$ 0.10, balanced accuracy of 0.69 $\pm$ 0.10, and an F1-score of 0.67 $\pm$ 0.11. In-domain pre-training significantly outperformed ImageNet initialization (p = 0.031), confirming that domain-specific pretraining is important for BPD outcome prediction. Routine IRDS grades showed limited prognostic value (AUROC 0.57 $\pm$ 0.11), confirming the need for learned markers. Our approach demonstrates that domain-specific pretraining enables accurate BPD prediction from routine day-1 radiographs. Through progressive freezing and linear probing, the method remains computationally feasible for site-level implementation and future federated learning deployments.
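Progressive layer freezing with discriminative learning rates, as described above, amounts to a per-layer schedule: later layers unfreeze first and train with larger rates, earlier layers unfreeze later with geometrically smaller rates. A framework-agnostic sketch; the layer names, base rate, decay factor, and stage length are illustrative assumptions:

```python
def freezing_schedule(layer_names, base_lr=1e-3, decay=0.5, epochs_per_stage=2):
    """Progressive unfreezing with discriminative learning rates:
    the last layer unfreezes at epoch 0 with base_lr; each earlier
    layer unfreezes one stage later with a geometrically smaller rate."""
    schedule = {}
    for depth, name in enumerate(reversed(layer_names)):
        schedule[name] = {
            "unfreeze_epoch": depth * epochs_per_stage,
            "lr": base_lr * (decay ** depth),
        }
    return schedule

# Hypothetical ResNet-50-style stage names
layers = ["conv1", "layer1", "layer2", "layer3", "layer4"]
sched = freezing_schedule(layers)
print(sched["layer4"])  # → {'unfreeze_epoch': 0, 'lr': 0.001}
print(sched["conv1"])   # → {'unfreeze_epoch': 8, 'lr': 6.25e-05}
```

A training loop would consult this schedule each epoch to decide which parameter groups receive gradients and at what rate.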

Imaging analysis using Artificial Intelligence to predict outcomes after endovascular aortic aneurysm repair: protocol for a retrospective cohort study.

Lareyre F, Raffort J, Kakkos SK, D'Oria M, Nasr B, Saratzis A, Antoniou GA, Hinchliffe RJ

pubmed · Jul 16 2025
Endovascular aortic aneurysm repair (EVAR) requires long-term surveillance to detect and treat postoperative complications. However, prediction models to optimise follow-up strategies are still lacking. The primary objective of this study is to develop predictive models of post-operative outcomes following elective EVAR using Artificial Intelligence (AI)-driven analysis. The secondary objective is to investigate morphological aortic changes following EVAR. This international, multicentre, observational study will retrospectively include 500 patients who underwent elective EVAR. Primary outcomes are EVAR postoperative complications including deaths, re-interventions, endoleaks, limb occlusion and stent-graft migration occurring within 1 year and at mid-term follow-up (1 to 3 years). Secondary outcomes are aortic anatomical changes. Morphological changes following EVAR will be analysed and compared based on preoperative and postoperative CT angiography (CTA) images (within 1 to 12 months, and at the last follow-up) using the AI-based software PRAEVAorta 2 (Nurea). Deep learning algorithms will be applied to stratify the risk of postoperative outcomes into low- or high-risk categories. The training and testing datasets will comprise 70% and 30% of the cohort, respectively. The study protocol is designed to ensure that the sponsor and the investigators comply with the principles of the Declaration of Helsinki and the ICH E6 good clinical practice guideline. The study has been approved by the ethics committee of the University Hospital of Patras (Patras, Greece) under the number 492/05.12.2024. The results of the study will be presented at relevant national and international conferences and submitted for publication to peer-review journals.
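The planned 70%/30% cohort partition can be sketched as a seeded shuffle-and-split; the seed and the use of integer patient IDs are illustrative assumptions, not part of the protocol:

```python
import random

def split_cohort(patient_ids, train_frac=0.7, seed=42):
    """Shuffle a cohort reproducibly and split it into training
    and testing sets (70% / 30% as planned in the protocol)."""
    ids = list(patient_ids)
    random.Random(seed).shuffle(ids)
    cut = round(len(ids) * train_frac)
    return ids[:cut], ids[cut:]

# 500 hypothetical patient IDs, as in the planned cohort size
train, test = split_cohort(range(500))
print(len(train), len(test))  # → 350 150
```

For outcome prediction a stratified split (preserving the complication rate in both sets) would usually be preferred over a plain shuffle.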

An end-to-end interpretable machine-learning-based framework for early-stage diagnosis of gallbladder cancer using multi-modality medical data.

Zhao H, Miao C, Zhu Y, Shu Y, Wu X, Yin Z, Deng X, Gong W, Yang Z, Zou W

pubmed · Jul 16 2025
The accurate early-stage diagnosis of gallbladder cancer (GBC) is regarded as one of the major challenges in the field of oncology. However, few studies have focused on the comprehensive classification of GBC based on multiple modalities. This study aims to develop a comprehensive diagnostic framework for GBC based on both imaging and non-imaging medical data. This retrospective study reviewed 298 patients with gallbladder disease or healthy volunteers from two devices. A novel end-to-end interpretable diagnostic framework for GBC is proposed to handle multiple medical modalities, including CT imaging, demographics, tumor markers, coagulation function tests, and routine blood tests. To achieve better feature extraction and fusion of the imaging modality, a novel global-hybrid-local network, namely GHL-Net, has also been developed. The ensemble learning strategy is employed to fuse multi-modality data and obtain the final classification result. In addition, two interpretable methods are applied to help clinicians understand the model-based decisions. Model performance was evaluated through accuracy, precision, specificity, sensitivity, F1-score, area under the curve (AUC), and Matthews correlation coefficient (MCC). In both binary and multi-class classification scenarios, the proposed method outperformed the comparison methods on both datasets. Especially in the binary classification scenario, the proposed method achieved the highest accuracy, sensitivity, specificity, precision, F1-score, ROC-AUC, PR-AUC, and MCC of 95.24%, 93.55%, 96.87%, 96.67%, 95.08%, 0.9591, 0.9636, and 0.9051, respectively. The visualization results obtained based on the interpretable methods also demonstrated a high clinical relevance of the intermediate decision-making processes. Ablation studies then provided an in-depth understanding of our methodology.
The machine learning-based framework can effectively improve the accuracy of GBC diagnosis and is expected to have a more significant impact in other cancer diagnosis scenarios.
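The Matthews correlation coefficient reported above is a single summary of a binary confusion matrix that stays informative under class imbalance. A stdlib sketch; the confusion counts in the example are hypothetical, chosen only to land near the reported MCC of 0.9051:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from binary confusion counts;
    returns 0.0 when any marginal sum is zero (undefined case)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Hypothetical confusion counts for a GBC-vs-benign classifier
print(round(mcc(tp=29, tn=62, fp=2, fn=2), 4))  # → 0.9042
```

Unlike accuracy, MCC only approaches 1 when all four confusion-matrix cells are favorable, which is why it is often reported alongside AUC for imbalanced medical datasets.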