
Bronchiectasis in patients with chronic obstructive pulmonary disease: AI-based CT quantification using the bronchial tapering ratio.

Park H, Choe J, Lee SM, Lim S, Lee JS, Oh YM, Lee JB, Hwang HJ, Yun J, Bae S, Yu D, Loh LC, Ong CK, Seo JB

PubMed | Aug 26, 2025
Although chest CT is the primary tool for evaluating bronchiectasis, accurately measuring its extent poses challenges. This study aimed to automatically quantify bronchiectasis using an artificial intelligence (AI)-based analysis of the bronchial tapering ratio on chest CT and to assess its association with clinical outcomes in patients with chronic obstructive pulmonary disease (COPD). COPD patients from two prospective multicenter cohorts were included. AI-based airway quantification was performed on baseline CT, measuring the tapering ratio for each bronchus in the whole lung. A bronchiectasis score reflecting the whole-lung extent of bronchi with abnormal tapering (inner-lumen tapering ratio ≥ 1.1, indicating airway dilatation) was calculated. Associations between the bronchiectasis score and all-cause mortality and acute exacerbation (AE) were assessed using multivariable models. The discovery and validation cohorts included 361 patients (mean age, 67 years; 97.5% men) and 112 patients (mean age, 67 years; 93.7% men), respectively. In the discovery cohort, 220 (60.9%) had a history of at least one AE and 59 (16.3%) died during follow-up; 18 (16.1%) died in the validation cohort. The bronchiectasis score was independently associated with increased mortality (discovery: adjusted HR, 1.86 [95% CI: 1.08-3.18]; validation: HR, 5.42 [95% CI: 1.97-14.92]). The score was also associated with the risk of any AE, severe AE, and a shorter time to first AE (for all, p < 0.05). In patients with COPD, the extent of bronchiectasis quantified by AI-based CT analysis of the bronchial tapering ratio was associated with all-cause mortality and the risk of AE over time.
Question: Can AI-based CT quantification of bronchial tapering reliably assess bronchiectasis relevant to clinical outcomes in patients with COPD?
Findings: Scores from this AI-based method of automatically quantifying the extent of whole-lung bronchiectasis were independently associated with all-cause mortality and the risk of AEs in COPD patients.
Clinical relevance: AI-based bronchiectasis analysis on CT may shift clinical research toward more objective, quantitative assessment methods and support risk stratification and management in COPD, highlighting its potential to enhance clinically relevant imaging evaluation.
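The abstract specifies the dilatation threshold (inner-lumen tapering ratio ≥ 1.1) but not the exact score formula. A minimal sketch, assuming the score is simply the fraction of measured bronchi at or above the threshold (the function name and signature are illustrative, not from the paper):

```python
def bronchiectasis_score(tapering_ratios, threshold=1.1):
    """Fraction of measured bronchi whose inner-lumen tapering ratio
    meets or exceeds the dilatation threshold (assumed scoring rule)."""
    if not tapering_ratios:
        raise ValueError("no bronchi measured")
    abnormal = sum(1 for r in tapering_ratios if r >= threshold)
    return abnormal / len(tapering_ratios)
```

The published score may instead weight bronchi by generation or lobe; this sketch only illustrates the thresholding step the abstract describes.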

Relation knowledge distillation 3D-ResNet-based deep learning for breast cancer molecular subtypes prediction on ultrasound videos: a multicenter study.

Wu Y, Zhou L, Zhao J, Peng Y, Li X, Wang Y, Zhu S, Hou C, Du P, Ling L, Wang Y, Tian J, Sun L

PubMed | Aug 26, 2025
To develop and test a relation knowledge distillation three-dimensional residual network (RKD-R3D) model for predicting breast cancer molecular subtypes from ultrasound (US) videos to aid personalized clinical management. This multicentre study retrospectively included 882 breast cancer patients (2375 US videos and 9499 images) between January 2017 and December 2021, divided into training, validation, and internal test cohorts. An additional 86 patients were collected between May 2023 and November 2023 as the external test cohort. St. Gallen molecular subtypes (luminal A, luminal B, HER2-positive, and triple-negative) were confirmed via postoperative immunohistochemistry. RKD-R3D was developed and validated to predict the four-class molecular subtypes of breast cancer from US videos. Its predictive performance was compared with RKD-R2D, traditional R3D, and preoperative core needle biopsy (CNB). The area under the receiver operating characteristic curve (AUC), sensitivity, specificity, accuracy, balanced accuracy, precision, recall, and F1-score were analyzed. RKD-R3D (AUC: 0.88, 0.95) outperformed RKD-R2D (AUC: 0.72, 0.85) and traditional R3D (AUC: 0.65, 0.79) in predicting the four-class breast cancer molecular subtypes in the internal and external test cohorts. RKD-R3D outperformed CNB (accuracy: 0.87 vs. 0.79) in the external test cohort, achieved good performance in distinguishing triple-negative from non-triple-negative breast cancers (AUC: 0.98), and obtained satisfactory prediction performance for both T1 and non-T1 lesions (AUC: 0.96, 0.90). Used with US videos, RKD-R3D is a potential supplementary tool for non-invasively assessing breast cancer molecular subtypes.
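Relation knowledge distillation, as described in the RKD literature, transfers the relational structure among embeddings rather than individual predictions. A minimal sketch of a distance-wise variant, assuming normalized pairwise distances and a squared-error penalty in place of the usual Huber loss (all names illustrative; the paper's exact loss is not stated in the abstract):

```python
import numpy as np

def pairwise_dists(x):
    """Euclidean distances between all embedding pairs (upper triangle)."""
    n = len(x)
    d = [np.linalg.norm(x[i] - x[j]) for i in range(n) for j in range(i + 1, n)]
    return np.array(d)

def rkd_distance_loss(teacher, student):
    """Distance-wise relational KD loss: match the normalized pairwise
    distance structure of student embeddings to the teacher's."""
    t = pairwise_dists(teacher)
    s = pairwise_dists(student)
    t = t / t.mean()  # scale-invariant normalization
    s = s / s.mean()
    return float(np.mean((t - s) ** 2))
```

Because distances are normalized by their mean, a student that reproduces the teacher's geometry at any scale incurs zero loss.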

Optimized deep learning for brain tumor detection: a hybrid approach with attention mechanisms and clinical explainability.

Aiya AJ, Wani N, Ramani M, Kumar A, Pant S, Kotecha K, Kulkarni A, Al-Danakh A

PubMed | Aug 26, 2025
Brain tumor classification (BTC) from Magnetic Resonance Imaging (MRI) is a critical diagnostic task with direct bearing on treatment planning. In this study, we propose a hybrid deep learning (DL) model that integrates VGG16, an attention mechanism, and optimized hyperparameters to classify brain tumors into four categories: glioma, meningioma, pituitary tumor, and no tumor. The approach leverages state-of-the-art preprocessing techniques, transfer learning, and Gradient-weighted Class Activation Mapping (Grad-CAM) visualization on a dataset of 7023 MRI images to enhance both performance and interpretability. The proposed model achieves 99% test accuracy with high precision and recall, outperforming traditional approaches such as Support Vector Machines (SVM) with Histogram of Oriented Gradients (HOG), Local Binary Patterns (LBP), and Principal Component Analysis (PCA) by a significant margin. Moreover, the model eliminates the need for manual feature labelling, a common challenge in this domain, by employing end-to-end learning, which allows it to derive meaningful features and thus reduces human input. The attention mechanism further promotes feature selection, in turn improving classification accuracy, while Grad-CAM visualizations show which regions of the image had the greatest impact on classification decisions, increasing transparency in clinical settings. Overall, the combination of strong predictive performance, automatic feature extraction, and improved interpretability establishes the model as a valuable neural-network approach to brain tumor classification, with potential to enhance medical imaging (MI) and clinical decision-making.
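Grad-CAM, used above for interpretability, weights each activation channel by its spatially averaged gradient and keeps the positive part of the weighted sum. A minimal numpy sketch under those standard definitions (channel-first arrays assumed; this is the generic technique, not the paper's implementation):

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from a conv layer's activations (C, H, W) and
    the gradients of the class score w.r.t. those activations.

    Channel weights are the spatially averaged gradients; the map is
    the ReLU of the weighted activation sum."""
    weights = gradients.mean(axis=(1, 2))             # (C,)
    cam = np.tensordot(weights, activations, axes=1)  # (H, W)
    return np.maximum(cam, 0.0)                       # keep positive evidence
```

In practice the map is then upsampled to the input resolution and overlaid on the MRI slice.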

Adverse cardiovascular events in coronary Plaques not undeRgoing pErcutaneous coronary intervention evaluateD with optIcal Coherence Tomography. The PREDICT-AI risk model.

Bruno F, Immobile Molaro M, Sperti M, Bianchini F, Chu M, Cardaci C, Wańha W, Gasior P, Zecchino S, Pavani M, Vergallo R, Biscaglia S, Cerrato E, Secco GG, Mennuni M, Mancone M, De Filippo O, Mattesini A, Canova P, Boi A, Ugo F, Scarsini R, Costa F, Fabris E, Campo G, Wojakowski W, Morbiducci U, Deriu M, Tu S, Piccolo R, D'Ascenzo F, Chiastra C, Burzotta F

PubMed | Aug 26, 2025
Most acute coronary syndromes (ACS) originate from coronary plaques that are angiographically mild and not flow limiting. These lesions, often characterised by thin-cap fibroatheroma, large lipid cores and macrophage infiltration, are termed 'vulnerable plaques' and are associated with a heightened risk of future major adverse cardiovascular events (MACE). However, current imaging modalities lack robust predictive power, and treatment strategies for such plaques remain controversial. The PREDICT-AI study aims to develop and externally validate a machine learning (ML)-based risk score that integrates optical coherence tomography (OCT) plaque features and patient-level clinical data to predict the natural history of non-flow-limiting coronary lesions not treated with percutaneous coronary intervention (PCI). This is a multicentre, prospective, observational study enrolling 500 patients with recent ACS who undergo comprehensive three-vessel OCT imaging. Lesions not treated with PCI will be characterised using artificial intelligence (AI)-based plaque analysis (OctPlus software), including quantification of fibrous cap thickness, lipid arc, macrophage presence and other microstructural features. A three-step ML pipeline will be used to derive and validate a risk score predicting MACE at follow-up. Outcomes will be adjudicated blinded to OCT findings. The primary endpoint is MACE (composite of cardiovascular death, myocardial infarction, urgent revascularisation or target vessel revascularisation). Event prediction will be assessed at both the patient level and plaque level. The PREDICT-AI study will generate a clinically applicable, AI-driven risk stratification tool based on high-resolution intracoronary imaging. By identifying high-risk, non-obstructive coronary plaques, this model may enhance personalised management strategies and support the transition towards precision medicine in coronary artery disease.

A Fully Automated 3D CT U-Net Framework for Segmentation and Measurement of the Masseter Muscle, Innovatively Incorporating a Self-Supervised Algorithm to Effectively Reduce Sample Size: A Validation Study in East Asian Populations.

Qiu X, Han W, Wang L, Chai G

PubMed | Aug 26, 2025
The segmentation and volume measurement of the masseter muscle play an important role in radiological evaluation. Manual segmentation is considered the gold standard, but its efficiency is limited. This study aims to develop and evaluate a U-Net-based coarse-to-fine learning framework for automated segmentation and volume measurement of the masseter muscle, providing baseline data on muscle characteristics in 840 healthy East Asian volunteers, while introducing a self-supervised algorithm to reduce the sample size required for deep learning. A database of 840 individuals (253 males, 587 females) with negative head CT scans was utilized. Following a G*Power sample size calculation, 15 cases were randomly chosen for clinical validation. Masseter segmentation was conducted manually in the manual group and automatically in the Auto-Seg group. The primary endpoint was masseter muscle volume; secondary endpoints included morphological score and runtime, benchmarked against manual segmentation. Reliability tests and paired t tests analyzed intra- and inter-group differences. Additionally, automatic volumetric measurements and asymmetry, calculated as (L - R)/(L + R) × 100%, were evaluated, with correlations to clinical parameters analyzed via Pearson's correlation test. The volume accuracy of automatic segmentation matched that of manual delineation (P > 0.05), demonstrating equivalence. Manual segmentation's runtime (937.3 ± 95.9 s) significantly exceeded the algorithm's (<1 s, p < 0.001). Among the 840 patients, masseter asymmetry was 4.6% ± 4.6%, with volumes of 35.5 ± 9.6 cm<sup>3</sup> for adult males and 26.6 ± 7.5 cm<sup>3</sup> for adult females. The U-Net-based algorithm demonstrates high concordance with manual segmentation in delineating the masseter muscle, establishing it as a reliable and efficient tool for CT-based assessments in healthy East Asian populations.
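A minimal sketch of the asymmetry computation above, assuming the denominator is the summed left and right volumes, L + R (the function name is illustrative):

```python
def masseter_asymmetry(left_cm3, right_cm3):
    """Percent volume asymmetry: (L - R) / (L + R) x 100,
    assuming the denominator is the summed left + right volume."""
    return (left_cm3 - right_cm3) / (left_cm3 + right_cm3) * 100.0
```

Under this definition, a left volume of 18 cm³ against a right volume of 12 cm³ gives 20% asymmetry, and the sign indicates which side is larger.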

Improved pulmonary embolism detection in CT pulmonary angiogram scans with hybrid vision transformers and deep learning techniques.

Abdelhamid A, El-Ghamry A, Abdelhay EH, Abo-Zahhad MM, Moustafa HE

PubMed | Aug 26, 2025
Pulmonary embolism (PE) is a severe, life-threatening cardiovascular condition and the third leading cause of cardiovascular mortality, after myocardial infarction and stroke. It occurs when blood clots obstruct the pulmonary arteries, impeding blood flow and oxygen exchange in the lungs. Prompt and accurate detection of PE is critical for appropriate clinical decision-making and patient survival. The complexity involved in interpreting medical images can often result in misdiagnosis. Recent advances in Deep Learning (DL), however, have substantially improved the capabilities of Computer-Aided Diagnosis (CAD) systems. Despite these advancements, existing single-model DL methods are limited when handling complex, diverse, and imbalanced medical imaging datasets. Addressing this gap, our research proposes an ensemble framework for classifying PE that capitalizes on the complementary capabilities of ResNet50, DenseNet121, and Swin Transformer models. This ensemble harnesses the strengths of both convolutional neural networks (CNNs) and vision transformers (ViTs), leading to improved prediction accuracy and model robustness. The proposed methodology includes a preprocessing pipeline leveraging autoencoder (AE)-based dimensionality reduction, data augmentation to avoid overfitting, the discrete wavelet transform (DWT) for multiscale feature extraction, and Sobel filtering for effective edge detection and noise reduction. The proposed model was rigorously evaluated on the public Radiological Society of North America (RSNA-STR) PE dataset, achieving 97.80% accuracy and an area under the receiver operating characteristic curve (AUROC) of 0.99. Comparative analysis demonstrated superior performance over state-of-the-art pre-trained models and recent ViT-based approaches, highlighting our method's effectiveness in improving early PE detection and providing robust support for clinical decision-making.
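Two of the preprocessing steps named above, Sobel edge detection and the discrete wavelet transform, can be sketched in plain numpy. This uses an unnormalized one-level Haar-style average/difference transform as a stand-in; the paper's exact wavelet, normalization, and boundary handling are not stated in the abstract:

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude via 3x3 Sobel kernels (valid region only)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = img[i:i + 3, j:j + 3]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy)

def haar_dwt2(img):
    """One-level 2D Haar-style DWT (average/difference, unnormalized):
    returns the (LL, LH, HL, HH) sub-bands for multiscale features."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # vertical average
    d = (img[0::2, :] - img[1::2, :]) / 2   # vertical detail
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh
```

A flat region yields zero Sobel magnitude and zero detail sub-bands, which is why these filters emphasize vessel and clot boundaries over homogeneous lung tissue.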

Shining light on degeneracies and uncertainties in quantifying both exchange and restriction with time-dependent diffusion MRI using Bayesian inference

Maëliss Jallais, Quentin Uhl, Tommaso Pavan, Malwina Molendowska, Derek K. Jones, Ileana Jelescu, Marco Palombo

arXiv preprint | Aug 26, 2025
Diffusion MRI (dMRI) biophysical models hold promise for characterizing gray matter tissue microstructure. Yet, the reliability of estimated parameters remains largely under-studied, especially in models that incorporate water exchange. In this study, we investigate the accuracy, precision, and presence of degeneracy of two recently proposed gray matter models, NEXI and SANDIX, using two acquisition protocols from the literature, on both simulated and in vivo data. We employ $\mu$GUIDE, a Bayesian inference framework based on deep learning, to quantify model uncertainty and detect parameter degeneracies, enabling a more interpretable assessment of fitted parameters. Our results show that while some microstructural parameters, such as extra-cellular diffusivity and neurite signal fraction, are robustly estimated, others, such as exchange time and soma radius, are often associated with high uncertainty and estimation bias, especially under realistic noise conditions and reduced acquisition protocols. Comparisons with non-linear least squares fitting underscore the added value of uncertainty-aware methods, which allow for the identification and filtering of unreliable estimates. These findings emphasize the need to report uncertainty and consider model degeneracies when interpreting model-based estimates. Our study advocates for the integration of probabilistic fitting approaches in neuroscience imaging pipelines to improve reproducibility and biological interpretability.

AT-CXR: Uncertainty-Aware Agentic Triage for Chest X-rays

Xueyang Li, Mingze Jiang, Gelei Xu, Jun Xia, Mengzhao Jia, Danny Chen, Yiyu Shi

arXiv preprint | Aug 26, 2025
Agentic AI is advancing rapidly, yet truly autonomous medical-imaging triage, where a system decides when to stop, escalate, or defer under real constraints, remains relatively underexplored. To address this gap, we introduce AT-CXR, an uncertainty-aware agent for chest X-rays. The system estimates per-case confidence and distributional fit, then follows a stepwise policy to issue an automated decision or abstain with a suggested label for human intervention. We evaluate two router designs that share the same inputs and actions: a deterministic rule-based router and an LLM-decided router. Across a five-fold evaluation on a balanced subset of the NIH ChestX-ray14 dataset, both variants outperform strong zero-shot vision-language models and state-of-the-art supervised classifiers, achieving higher full-coverage accuracy and superior selective-prediction performance, evidenced by a lower area under the risk-coverage curve (AURC) and a lower error rate at high coverage, while operating with latency low enough to meet practical clinical constraints. The two routers provide complementary operating points, enabling deployments to prioritize either maximal throughput or maximal accuracy. Our code is available at https://github.com/XLIAaron/uncertainty-aware-cxr-agent.
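The AURC metric cited above summarizes selective prediction: rank cases by confidence, grow coverage one case at a time starting from the most confident, and average the running error rate. A minimal sketch under that standard definition (names illustrative):

```python
def aurc(confidences, errors):
    """Area under the risk-coverage curve: the average selective risk
    as coverage grows one sample at a time, most-confident first.
    `errors` holds 0/1 per-case mistakes; lower AURC is better."""
    order = sorted(range(len(confidences)),
                   key=lambda i: confidences[i], reverse=True)
    cum_err, risks = 0.0, []
    for k, i in enumerate(order, start=1):
        cum_err += errors[i]
        risks.append(cum_err / k)   # risk at coverage k/n
    return sum(risks) / len(risks)
```

A well-calibrated system concentrates its mistakes among its least-confident cases, pushing the risk-coverage curve (and hence the AURC) down.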

MedVQA-TREE: A Multimodal Reasoning and Retrieval Framework for Sarcopenia Prediction

Pardis Moradbeiki, Nasser Ghadiri, Sayed Jalal Zahabi, Uffe Kock Wiil, Kristoffer Kittelmann Brockhattingen, Ali Ebrahimi

arXiv preprint | Aug 26, 2025
Accurate sarcopenia diagnosis via ultrasound remains challenging due to subtle imaging cues, limited labeled data, and the absence of clinical context in most models. We propose MedVQA-TREE, a multimodal framework that integrates a hierarchical image interpretation module, a gated feature-level fusion mechanism, and a novel multi-hop, multi-query retrieval strategy. The vision module includes anatomical classification, region segmentation, and graph-based spatial reasoning to capture coarse, mid-level, and fine-grained structures. A gated fusion mechanism selectively integrates visual features with textual queries, while clinical knowledge is retrieved through a UMLS-guided pipeline accessing PubMed and a sarcopenia-specific external knowledge base. MedVQA-TREE was trained and evaluated on two public MedVQA datasets (VQA-RAD and PathVQA) and a custom sarcopenia ultrasound dataset. The model achieved up to 99% diagnostic accuracy and outperformed previous state-of-the-art methods by over 10%. These results underscore the benefit of combining structured visual understanding with guided knowledge retrieval for effective AI-assisted diagnosis in sarcopenia.
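The gated feature-level fusion described above is commonly implemented as a learned sigmoid gate interpolating between modality features. A minimal numpy sketch under that assumption; the weight matrix `w`, bias `b`, and shapes are illustrative, not taken from the paper:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(visual, text, w, b):
    """Feature-level gated fusion: a sigmoid gate, computed from the
    concatenated features, decides per dimension how much of the
    visual vs. textual feature passes through.

    visual, text: (d,) feature vectors; w: (d, 2d); b: (d,)."""
    gate = sigmoid(w @ np.concatenate([visual, text]) + b)
    return gate * visual + (1.0 - gate) * text
```

With zero weights the gate sits at 0.5 and the output is the plain average of the two modalities; training moves the gate toward whichever modality is more informative per dimension.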

Stress-testing cross-cancer generalizability of 3D nnU-Net for PET-CT tumor segmentation: multi-cohort evaluation with novel oesophageal and lung cancer datasets

Soumen Ghosh, Christine Jestin Hannan, Rajat Vashistha, Parveen Kundu, Sandra Brosda, Lauren G. Aoude, James Lonie, Andrew Nathanson, Jessica Ng, Andrew P. Barbour, Viktor Vegh

arXiv preprint | Aug 26, 2025
Robust generalization is essential for deploying deep learning-based tumor segmentation in clinical PET-CT workflows, where anatomical sites, scanners, and patient populations vary widely. This study presents the first cross-cancer evaluation of nnU-Net on PET-CT, introducing two novel, expert-annotated whole-body datasets: 279 patients with oesophageal cancer (Australian cohort) and 54 with lung cancer (Indian cohort). These cohorts complement the public AutoPET dataset and enable systematic stress-testing of cross-domain performance. We trained and tested 3D nnU-Net models under three paradigms: target-only (oesophageal), public-only (AutoPET), and combined training. On the test sets, the oesophageal-only model achieved the best in-domain accuracy (mean DSC, 57.8) but failed on the external Indian lung cohort (mean DSC less than 3.4), indicating severe overfitting. The public-only model generalized more broadly (mean DSC, 63.5 on AutoPET and 51.6 on the Indian lung cohort) but underperformed on the oesophageal Australian cohort (mean DSC, 26.7). The combined approach provided the most balanced results (mean DSC: lung, 52.9; oesophageal, 40.7; AutoPET, 60.9), reducing boundary errors and improving robustness across all cohorts. These findings demonstrate that dataset diversity, particularly multi-demographic, multi-center, and multi-cancer integration, outweighs architectural novelty as the key driver of robust generalization. This work presents a demography-based, cross-cancer evaluation of deep learning segmentation and highlights dataset diversity, rather than model complexity, as the foundation for clinically robust segmentation.
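The mean DSC figures above are Dice similarity coefficients, the standard overlap metric for segmentation. A minimal sketch over binary masks represented as voxel-index sets (a simplification of the usual dense-array computation; names illustrative):

```python
def dice_coefficient(pred, target):
    """Dice similarity coefficient (DSC) between two binary masks,
    here given as sets of foreground voxel indices.

    DSC = 2|P & T| / (|P| + |T|), ranging from 0 (no overlap) to 1."""
    pred, target = set(pred), set(target)
    if not pred and not target:
        return 1.0  # both empty: perfect agreement by convention
    return 2 * len(pred & target) / (len(pred) + len(target))
```

For 3D PET-CT masks the same formula is applied to boolean voxel arrays, and the reported mean DSC averages the per-patient scores.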
