Novel CAC Dispersion and Density Score to Predict Myocardial Infarction and Cardiovascular Mortality.

Huangfu G, Ihdayhid AR, Kwok S, Konstantopoulos J, Niu K, Lu J, Smallbone H, Figtree GA, Chow CK, Dembo L, Adler B, Hamilton-Craig C, Grieve SM, Chan MTV, Butler C, Tandon V, Nagele P, Woodard PK, Mrkobrada M, Szczeklik W, Aziz YFA, Biccard B, Devereaux PJ, Sheth T, Dwivedi G, Chow BJW

PubMed · Jul 4 2025
Coronary artery calcification (CAC) provides robust prediction of major adverse cardiovascular events (MACE), but current techniques disregard plaque distribution and the protective effect of high CAC density. We investigated whether a novel CAC-dispersion and density (CAC-DAD) score would exhibit superior prognostic value compared with the Agatston score (AS) for MACE prediction. We conducted a multicenter, retrospective, cross-sectional study of 961 patients (median age, 67 years; 61% male) who underwent cardiac computed tomography for cardiovascular or perioperative risk assessment. Blinded analyzers applied deep learning algorithms to noncontrast scans to calculate the CAC-DAD score, which adjusts for the spatial distribution of CAC and assigns a protective weight factor to lesions with ≥1000 Hounsfield units. Associations were assessed using frailty regression. Over a median follow-up of 30 (30-460) days, 61 patients experienced MACE (nonfatal myocardial infarction or cardiovascular mortality). An elevated CAC-DAD score (≥2050, based on the optimal cutoff) captured more MACE than AS ≥400 (74% versus 57%; P=0.002). Univariable analysis revealed that an elevated CAC-DAD score, AS ≥400, AS ≥100, age, diabetes, hypertension, and statin use predicted MACE. On multivariable analysis, only the CAC-DAD score (hazard ratio, 2.57 [95% CI, 1.43-4.61]; P=0.002), age, statin use, and diabetes remained significant. Including the CAC-DAD score in a predictive model containing demographic factors and AS improved the C statistic from 0.61 to 0.66 (P=0.008). The fully automated CAC-DAD score improves MACE prediction compared with the AS. Patients with a high CAC-DAD score, including those with a low AS, may be at higher risk and warrant intensification of their preventive therapies.
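
The abstract does not give the CAC-DAD formula, so the snippet below is only a hypothetical Python illustration of the two ingredients it describes: an Agatston-like per-lesion term with a protective weight for lesions ≥1000 HU, and an adjustment for how widely lesions are dispersed. The function name, weights, and dispersion term are invented for clarity and are not the published score.

```python
# Hypothetical illustration only: the CAC-DAD formula is not specified in the
# abstract, so the weights and dispersion term here are invented for clarity.
import numpy as np

def toy_dispersion_density_score(lesions):
    """lesions: dicts with 'area_mm2', 'peak_hu', and 'xyz' centroid coordinates (mm)."""
    if not lesions:
        return 0.0
    score = 0.0
    for les in lesions:
        base = les["area_mm2"] * min(les["peak_hu"] / 100.0, 4.0)  # Agatston-like area x density term
        density_weight = 0.5 if les["peak_hu"] >= 1000 else 1.0    # hypothetical protective weight for very dense plaque
        score += base * density_weight
    centroids = np.array([les["xyz"] for les in lesions], dtype=float)
    dispersion = centroids.std(axis=0).mean() if len(lesions) > 1 else 0.0
    return score * (1.0 + dispersion / 100.0)  # hypothetical penalty for widely dispersed calcium

example = [
    {"area_mm2": 12.0, "peak_hu": 450, "xyz": (10.0, 22.0, 5.0)},
    {"area_mm2": 8.0, "peak_hu": 1200, "xyz": (40.0, 18.0, 30.0)},
]
print(round(toy_dispersion_density_score(example), 1))
```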

Hybrid-View Attention Network for Clinically Significant Prostate Cancer Classification in Transrectal Ultrasound

Zetian Feng, Juan Fu, Xuebin Zou, Hongsheng Ye, Hong Wu, Jianhua Zhou, Yi Wang

arXiv preprint · Jul 4 2025
Prostate cancer (PCa) is a leading cause of cancer-related mortality in men, and accurate identification of clinically significant PCa (csPCa) is critical for timely intervention. Transrectal ultrasound (TRUS) is widely used for prostate biopsy; however, its low contrast and anisotropic spatial resolution pose diagnostic challenges. To address these limitations, we propose a novel hybrid-view attention (HVA) network for csPCa classification in 3D TRUS that leverages complementary information from transverse and sagittal views. Our approach integrates a CNN-transformer hybrid architecture, in which convolutional layers extract fine-grained local features and the transformer-based HVA models global dependencies. Specifically, the HVA comprises intra-view attention to refine features within a single view and cross-view attention to incorporate complementary information across views. Furthermore, a hybrid-view adaptive fusion module dynamically aggregates features along both channel and spatial dimensions, enhancing the overall representation. Experiments are conducted on an in-house dataset of 590 subjects who underwent prostate biopsy. Comparative and ablation results demonstrate the efficacy of our method. The code is available at https://github.com/mock1ngbrd/HVAN.
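
As a rough illustration of the cross-view attention idea (not the authors' implementation, which is released at https://github.com/mock1ngbrd/HVAN), the sketch below lets tokens from one TRUS view attend to tokens from the other; the module name, dimensions, and residual/normalization choices are assumptions.

```python
# Minimal sketch of cross-view attention between transverse and sagittal tokens.
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Lets tokens from one view attend to tokens from the complementary view."""
    def __init__(self, dim: int = 256, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, query_tokens, context_tokens):
        fused, _ = self.attn(query_tokens, context_tokens, context_tokens)
        return self.norm(query_tokens + fused)  # residual connection around the attention

# Toy usage: 2 volumes, 64 tokens per view, 256-dim features from a CNN backbone.
transverse = torch.randn(2, 64, 256)
sagittal = torch.randn(2, 64, 256)
cross = CrossViewAttention()
print(cross(transverse, sagittal).shape)  # torch.Size([2, 64, 256])
```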

Identifying features of prior hemorrhage in cerebral cavernous malformations on quantitative susceptibility maps: a machine learning pilot study.

Kinkade S, Li H, Hage S, Koskimäki J, Stadnik A, Lee J, Shenkar R, Papaioannou J, Flemming KD, Kim H, Torbey M, Huang J, Carroll TJ, Girard R, Giger ML, Awad IA

PubMed · Jul 4 2025
Features of new bleeding on conventional imaging in cerebral cavernous malformations (CCMs) often disappear after several weeks, yet the risk of rebleeding persists long thereafter. Increases of ≥6% in mean lesional quantitative susceptibility mapping (QSM) values on MRI during 1 year of prospective surveillance have been associated with new symptomatic hemorrhage (SH) during that period. The authors hypothesized that QSM at a single time point reflects features of hemorrhage in the prior year or potential bleeding in the subsequent year. Twenty-eight features were extracted from 265 QSM acquisitions in 120 patients enrolled in a prospective trial readiness project, and machine learning methods examined associations with SH and biomarker bleed (QSM increase ≥ 6%) in prior and subsequent years. QSM features including sum variance, variance, and correlation had lower average values in lesions with SH in the prior year (p < 0.05, false discovery rate corrected). A support-vector machine classifier recurrently selected sum average, mean lesional QSM, sphericity, and margin sharpness features to distinguish biomarker bleeds in the prior year (area under the curve = 0.61, 95% CI 0.52-0.70; p = 0.02). No QSM features were associated with a subsequent bleed. These results provide proof of concept that machine learning may derive features of QSM reflecting prior hemorrhagic activity, meriting further investigation. Clinical trial registration no.: NCT03652181 (ClinicalTrials.gov).
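
A minimal sketch of the analysis pattern described above, combining feature selection with a linear support-vector machine and a cross-validated AUC; the feature matrix and labels below are synthetic stand-ins, not the study data, and the selection method (recursive feature elimination) is an assumption.

```python
# Hedged sketch: feature selection + linear SVM with cross-validated AUC on synthetic data.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(265, 28))       # 28 QSM texture/shape features per acquisition (synthetic)
y = rng.integers(0, 2, size=265)     # 1 = biomarker bleed in the prior year (synthetic labels)

model = make_pipeline(
    StandardScaler(),
    RFE(SVC(kernel="linear"), n_features_to_select=4),  # keep the 4 most informative features
    SVC(kernel="linear", probability=True),
)
auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC on synthetic data: {auc:.2f}")
```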

Adaptive Gate-Aware Mamba Networks for Magnetic Resonance Fingerprinting

Tianyi Ding, Hongli Chen, Yang Gao, Zhuang Xiong, Feng Liu, Martijn A. Cloos, Hongfu Sun

arXiv preprint · Jul 4 2025
Magnetic Resonance Fingerprinting (MRF) enables fast quantitative imaging by matching signal evolutions to a predefined dictionary. However, conventional dictionary matching (DM) suffers from exponential growth in computational cost and memory usage as the number of parameters increases, limiting its scalability to multi-parametric mapping. To address this, recent work has explored deep learning-based approaches as alternatives to DM. We propose GAST-Mamba, an end-to-end framework that combines a dual Mamba-based encoder with a Gate-Aware Spatial-Temporal (GAST) processor. Built on structured state-space models, our architecture efficiently captures long-range spatial dependencies with linear complexity. On 5-fold accelerated simulated MRF data (200 frames), GAST-Mamba achieved a T1 PSNR of 33.12 dB, outperforming SCQ (31.69 dB). For T2 mapping, it reached a PSNR of 30.62 dB and an SSIM of 0.9124. In vivo experiments further demonstrated improved anatomical detail and reduced artifacts. Ablation studies confirmed that each component contributes to performance, with the GAST module being particularly important under strong undersampling. These results demonstrate the effectiveness of GAST-Mamba for accurate and robust reconstruction from highly undersampled MRF acquisitions, offering a scalable alternative to traditional DM-based methods.
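
The GAST processor and the Mamba encoder are more involved than can be shown here; as a loose illustration of the gate-aware idea only, the sketch below blends a spatial and a temporal feature branch with a learned per-channel gate. The shapes, module choices, and gating form are assumptions, not the paper's architecture.

```python
# Illustrative gate-aware fusion only; not the actual GAST processor or Mamba encoder.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Blends a spatial branch and a temporal branch with a learned per-channel gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())

    def forward(self, spatial_feat, temporal_feat):
        g = self.gate(torch.cat([spatial_feat, temporal_feat], dim=1))
        return g * spatial_feat + (1.0 - g) * temporal_feat  # convex combination per channel/pixel

# Toy MRF-like input: batch of 1, 64 channels summarizing the frame dimension, 32x32 maps.
fusion = GatedFusion(64)
out = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```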

PhotIQA: A photoacoustic image data set with image quality ratings

Anna Breger, Janek Gröhl, Clemens Karner, Thomas R Else, Ian Selby, Jonathan Weir-McCall, Carola-Bibiane Schönlieb

arXiv preprint · Jul 4 2025
Image quality assessment (IQA) is crucial in the evaluation stage of novel algorithms operating on images, including both traditional and machine learning-based methods. Due to the lack of available quality-rated medical images, the most commonly used IQA methods employing reference images (i.e., full-reference IQA) have been developed and tested for natural images. The inconsistencies reported when such measures are applied to medical images are not surprising, as medical images have different properties than natural images. In photoacoustic imaging (PAI) especially, standard benchmarking approaches for assessing the quality of image reconstructions are lacking. PAI is a multi-physics imaging modality in which two inverse problems have to be solved, which makes the application of IQA measures uniquely challenging due to both acoustic and optical artifacts. To support the development and testing of full- and no-reference IQA measures, we assembled PhotIQA, a data set consisting of 1134 reconstructed photoacoustic (PA) images rated by two experts across five quality properties (overall quality, edge visibility, homogeneity, inclusion, and background intensity); the detailed ratings enable usage beyond PAI. To allow full-reference assessment, highly characterised imaging test objects were used, providing a ground truth. Our baseline experiments show that HaarPSI_med significantly outperforms SSIM in correlating with the quality ratings (SRCC: 0.83 vs. 0.62). The dataset is publicly available at https://doi.org/10.5281/zenodo.13325196.
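
A minimal sketch of the benchmarking pattern used in the baseline experiments: compute a full-reference metric against ground-truth images and correlate it with expert ratings via Spearman rank correlation (SRCC). The images and ratings below are synthetic, and SSIM stands in for the metrics compared in the paper.

```python
# Sketch of metric-vs-rating benchmarking on synthetic data (not the PhotIQA data).
import numpy as np
from scipy.stats import spearmanr
from skimage.metrics import structural_similarity as ssim

rng = np.random.default_rng(1)
references = [rng.random((128, 128)) for _ in range(10)]                    # ground-truth phantom images
reconstructions = [r + 0.05 * rng.standard_normal(r.shape) for r in references]
expert_ratings = rng.uniform(1, 5, size=10)                                 # stand-in for overall-quality scores

scores = [ssim(ref, rec, data_range=1.0) for ref, rec in zip(references, reconstructions)]
srcc, _ = spearmanr(scores, expert_ratings)
print(f"SSIM-vs-rating SRCC on synthetic data: {srcc:.2f}")
```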

Causal-SAM-LLM: Large Language Models as Causal Reasoners for Robust Medical Segmentation

Tao Tang, Shijie Xu, Yiting Wu, Zhixiang Lu

arXiv preprint · Jul 4 2025
The clinical utility of deep learning models for medical image segmentation is severely constrained by their inability to generalize to unseen domains. This failure is often rooted in the models learning spurious correlations between anatomical content and domain-specific imaging styles. To overcome this fundamental challenge, we introduce Causal-SAM-LLM, a novel framework that elevates Large Language Models (LLMs) to the role of causal reasoners. Our framework, built upon a frozen Segment Anything Model (SAM) encoder, incorporates two synergistic innovations. First, Linguistic Adversarial Disentanglement (LAD) employs a Vision-Language Model to generate rich, textual descriptions of confounding image styles. By training the segmentation model's features to be contrastively dissimilar to these style descriptions, it learns a representation robustly purged of non-causal information. Second, Test-Time Causal Intervention (TCI) provides an interactive mechanism where an LLM interprets a clinician's natural language command to modulate the segmentation decoder's features in real-time, enabling targeted error correction. We conduct an extensive empirical evaluation on a composite benchmark from four public datasets (BTCV, CHAOS, AMOS, BraTS), assessing generalization under cross-scanner, cross-modality, and cross-anatomy settings. Causal-SAM-LLM establishes a new state of the art in out-of-distribution (OOD) robustness, improving the average Dice score by up to 6.2 points and reducing the Hausdorff Distance by 15.8 mm over the strongest baseline, all while using less than 9% of the full model's trainable parameters. Our work charts a new course for building robust, efficient, and interactively controllable medical AI systems.
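
As a hedged sketch of the Linguistic Adversarial Disentanglement objective, the snippet below penalizes cosine similarity between segmentation features and embedded style descriptions; the loss form, feature shapes, and example style texts are assumptions rather than the paper's exact formulation.

```python
# Sketch of a style-disentanglement penalty: push image features away from
# embeddings of confounding style descriptions (assumed loss form).
import torch
import torch.nn.functional as F

def style_dissimilarity_loss(image_feats: torch.Tensor, style_text_feats: torch.Tensor) -> torch.Tensor:
    """Penalize cosine similarity between segmentation features and style-text embeddings."""
    img = F.normalize(image_feats, dim=-1)
    txt = F.normalize(style_text_feats, dim=-1)
    return (img @ txt.T).clamp(min=0).mean()  # only positive similarity is penalized

# Toy usage: 4 image feature vectors vs. 3 embedded style descriptions
# (e.g. "low-dose CT noise", "T1 bias field", "motion blur") -- placeholders here.
print(float(style_dissimilarity_loss(torch.randn(4, 512), torch.randn(3, 512))))
```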

SAMed-2: Selective Memory Enhanced Medical Segment Anything Model

Zhiling Yan, Sifan Song, Dingjie Song, Yiwei Li, Rong Zhou, Weixiang Sun, Zhennong Chen, Sekeun Kim, Hui Ren, Tianming Liu, Quanzheng Li, Xiang Li, Lifang He, Lichao Sun

arXiv preprint · Jul 4 2025
Recent "segment anything" efforts show promise by learning from large-scale data, but adapting such models directly to medical images remains challenging due to the complexity of medical data, noisy annotations, and continual learning requirements across diverse modalities and anatomical structures. In this work, we propose SAMed-2, a new foundation model for medical image segmentation built upon the SAM-2 architecture. Specifically, we introduce a temporal adapter into the image encoder to capture image correlations and a confidence-driven memory mechanism to store high-certainty features for later retrieval. This memory-based strategy counters the pervasive noise in large-scale medical datasets and mitigates catastrophic forgetting when encountering new tasks or modalities. To train and evaluate SAMed-2, we curate MedBank-100k, a comprehensive dataset spanning seven imaging modalities and 21 medical segmentation tasks. Our experiments on both internal benchmarks and 10 external datasets demonstrate superior performance over state-of-the-art baselines in multi-task scenarios. The code is available at: https://github.com/ZhilingYan/Medical-SAM-Bench.

ChestGPT: Integrating Large Language Models and Vision Transformers for Disease Detection and Localization in Chest X-Rays

Shehroz S. Khan, Petar Przulj, Ahmed Ashraf, Ali Abedi

arXiv preprint · Jul 4 2025
The global demand for radiologists is increasing rapidly due to a growing reliance on medical imaging services, while the supply of radiologists is not keeping pace. Advances in computer vision and image processing technologies present significant potential to address this gap by enhancing radiologists' capabilities and improving diagnostic accuracy. Large language models (LLMs), particularly generative pre-trained transformers (GPTs), have become the primary approach for understanding and generating textual data. In parallel, vision transformers (ViTs) have proven effective at converting visual data into a format that LLMs can process efficiently. In this paper, we present ChestGPT, a deep-learning framework that integrates the EVA ViT with the Llama 2 LLM to classify diseases and localize regions of interest in chest X-ray images. The ViT converts X-ray images into tokens, which are then fed, together with engineered prompts, into the LLM, enabling joint classification and localization of diseases. This approach incorporates transfer learning techniques to enhance both explainability and performance. The proposed method achieved strong global disease classification performance on the VinDr-CXR dataset, with an F1 score of 0.76, and successfully localized pathologies by generating bounding boxes around the regions of interest. We also outline several task-specific prompts, in addition to general-purpose prompts, for scenarios radiologists might encounter. Overall, this framework offers an assistive tool that can lighten radiologists' workload by providing preliminary findings and regions of interest to facilitate their diagnostic process.
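
A conceptual sketch of the ViT-to-LLM hand-off described above: visual patch tokens are projected into the language model's embedding space and prepended to the prompt embeddings. The dimensions and the linear projector are placeholders, not the actual EVA ViT or Llama 2 components.

```python
# Placeholder sketch of joining projected visual tokens with prompt embeddings.
import torch
import torch.nn as nn

vit_dim, llm_dim = 1024, 4096
projector = nn.Linear(vit_dim, llm_dim)        # maps visual tokens into the LLM token space

visual_tokens = torch.randn(1, 196, vit_dim)   # 14x14 patch tokens from the image encoder
prompt_embeds = torch.randn(1, 32, llm_dim)    # embedded instruction, e.g. "List findings and boxes"

llm_input = torch.cat([projector(visual_tokens), prompt_embeds], dim=1)
print(llm_input.shape)  # torch.Size([1, 228, 4096]) -> sequence fed to the language model
```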

ViT-GCN: A Novel Hybrid Model for Accurate Pneumonia Diagnosis from X-ray Images.

Xu N, Wu J, Cai F, Li X, Xie HB

PubMed · Jul 4 2025
This study aims to enhance the accuracy of pneumonia diagnosis from X-ray images by developing a model that integrates Vision Transformer (ViT) and Graph Convolutional Networks (GCN) for improved feature extraction and diagnostic performance. The ViT-GCN model was designed to leverage the strengths of both ViT, which captures global image information by dividing the image into fixed-size patches and processing them in sequence, and GCN, which captures node features and relationships through message passing and aggregation in graph data. A composite loss function combining multivariate cross-entropy, focal loss, and GHM loss was introduced to address dataset imbalance and improve training efficiency on small datasets. The ViT-GCN model demonstrated superior performance, achieving an accuracy of 91.43% on the COVID-19 chest X-ray database, surpassing existing models in diagnostic accuracy for pneumonia. The study highlights the effectiveness of combining ViT and GCN architectures in medical image diagnosis, particularly in addressing challenges related to small datasets. This approach can lead to more accurate and efficient pneumonia diagnoses, especially in resource-constrained settings where small datasets are common.
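
As an illustration of a composite classification loss in the spirit described above, the sketch below combines standard cross-entropy with a focal term; the GHM component and the weighting scheme are omitted or assumed, so this is not the paper's exact loss.

```python
# Sketch of a composite loss: cross-entropy + focal loss (GHM term omitted, weighting assumed).
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, targets: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    ce = F.cross_entropy(logits, targets, reduction="none")
    pt = torch.exp(-ce)                       # probability assigned to the true class
    return ((1.0 - pt) ** gamma * ce).mean()  # down-weight easy, well-classified examples

def composite_loss(logits, targets, alpha: float = 0.5):
    return alpha * F.cross_entropy(logits, targets) + (1 - alpha) * focal_loss(logits, targets)

logits = torch.randn(8, 3, requires_grad=True)   # e.g. 3 classes: normal / viral / COVID-19 pneumonia
targets = torch.randint(0, 3, (8,))
loss = composite_loss(logits, targets)
loss.backward()
print(float(loss))
```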