Page 4 of 1331322 results

Artificial Intelligence in Ventricular Arrhythmias and Sudden Cardiac Death: A Guide for Clinicians.

Antoun I, Li X, Abdelrazik A, Eldesouky M, Thu KM, Ibrahim M, Dhutia H, Somani R, Ng GA

PubMed · Sep 27, 2025
Sudden cardiac death (SCD) from ventricular arrhythmias (VAs) remains a leading cause of mortality worldwide. Traditional risk stratification, primarily based on left ventricular ejection fraction (LVEF) and other coarse metrics, often fails to identify a large subset of patients at risk and frequently leads to unnecessary device implantations. Advances in artificial intelligence (AI) offer new strategies to improve both long-term SCD risk prediction and near-term VA forecasting. In this review, we discuss how AI algorithms applied to the 12-lead electrocardiogram (ECG) can identify subtle risk markers in conditions such as hypertrophic cardiomyopathy (HCM), arrhythmogenic right ventricular cardiomyopathy (ARVC), and coronary artery disease (CAD), often outperforming conventional risk models. We also explore the integration of AI with cardiac imaging, such as scar quantification on cardiac magnetic resonance (CMR) and fibrosis mapping, to enhance the identification of the arrhythmogenic substrate. Furthermore, we investigate the application of data from implantable cardioverter-defibrillators (ICDs) and wearable devices to predict ventricular tachycardia (VT) or ventricular fibrillation (VF) events before they occur, thereby advancing care toward real-time prevention. Amid these innovations, we address the medicolegal and ethical implications of AI-driven automated alerts in arrhythmia care, highlighting when clinicians can trust AI predictions. Future directions include multimodal AI fusion to personalise SCD risk assessment, as well as AI-guided VT ablation planning through imaging-based digital heart models. This review provides a comprehensive overview for general medical readers, focusing on peer-reviewed advances worldwide in the emerging intersection of AI, VAs, and SCD prevention.

Benchmarking DINOv3 for Multi-Task Stroke Analysis on Non-Contrast CT

Donghao Zhang, Yimin Chen, Kauê TN Duarte, Taha Aslan, Mohamed AlShamrani, Brij Karmur, Yan Wan, Shengcai Chen, Bo Hu, Bijoy K Menon, Wu Qiu

arXiv preprint · Sep 27, 2025
Non-contrast computed tomography (NCCT) is essential for rapid stroke diagnosis but is limited by low image contrast and a low signal-to-noise ratio. We address this challenge by leveraging DINOv3, a state-of-the-art self-supervised vision transformer, to generate powerful feature representations for a comprehensive set of stroke analysis tasks. Our evaluation encompasses infarct and hemorrhage segmentation, anomaly classification (normal vs. stroke and normal vs. infarct vs. hemorrhage), hemorrhage subtype classification (EDH, SDH, SAH, IPH, IVH), and dichotomized ASPECTS classification (<=6 vs. >6) on multiple public and private datasets. This study establishes strong benchmarks for these tasks and demonstrates the potential of advanced self-supervised models to improve automated stroke diagnosis from NCCT, providing a clear analysis of both the advantages and current constraints of the approach. The code is available at https://github.com/Zzz0251/DINOv3-stroke.

Johnson-Lindenstrauss Lemma Guided Network for Efficient 3D Medical Segmentation

Jinpeng Lu, Linghan Cai, Yinda Chen, Guo Tang, Songhan Jiang, Haoyuan Shi, Zhiwei Xiong

arXiv preprint · Sep 26, 2025
Lightweight 3D medical image segmentation remains constrained by a fundamental "efficiency / robustness conflict", particularly when processing complex anatomical structures and heterogeneous modalities. In this paper, we study how to redesign the framework based on the characteristics of high-dimensional 3D images, and explore data synergy to overcome the fragile representation of lightweight methods. Our approach, VeloxSeg, begins with a deployable and extensible dual-stream CNN-Transformer architecture composed of Paired Window Attention (PWA) and Johnson-Lindenstrauss lemma-guided convolution (JLC). For each 3D image, we invoke a "glance-and-focus" principle, where PWA rapidly retrieves multi-scale information, and JLC ensures robust local feature extraction with minimal parameters, significantly enhancing the model's ability to operate with a low computational budget. The dual-stream architecture is then extended to incorporate modal interaction into the multi-scale image-retrieval process, enabling VeloxSeg to efficiently model heterogeneous modalities. Finally, Spatially Decoupled Knowledge Transfer (SDKT) via Gram matrices injects the texture prior extracted by a self-supervised network into the segmentation network, yielding stronger representations than baselines at no extra inference cost. Experimental results on multimodal benchmarks show that VeloxSeg achieves a 26% Dice improvement, alongside increasing GPU throughput by 11x and CPU throughput by 48x. Code is available at https://github.com/JinPLu/VeloxSeg.
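For readers unfamiliar with the lemma the JLC module is named after: the Johnson-Lindenstrauss lemma says that a random low-dimensional projection approximately preserves pairwise distances. A minimal, self-contained illustration of that guarantee (not the paper's implementation):

```python
import math
import random

def jl_project(vectors, k, seed=0):
    """Project d-dimensional vectors to k dimensions with a random
    Gaussian matrix scaled by 1/sqrt(k), per the JL lemma."""
    rng = random.Random(seed)
    d = len(vectors[0])
    # Random Gaussian projection matrix R of shape (k, d)
    R = [[rng.gauss(0.0, 1.0) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(r[j] * v[j] for j in range(d)) for r in R] for v in vectors]

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

rng = random.Random(42)
d, k = 1000, 200
x = [rng.gauss(0, 1) for _ in range(d)]
y = [rng.gauss(0, 1) for _ in range(d)]
px, py = jl_project([x, y], k, seed=1)
ratio = dist(px, py) / dist(x, y)  # close to 1.0 for moderate k
```

The distortion shrinks as k grows, which is why a cheap random-projection-style convolution can retain discriminative local structure with few parameters.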

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed · Sep 26, 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations, which incorporated DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using various metrics, including the Dice Similarity Coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and proton density (PD) of SyMRI, nnU-Net outperformed U-Net, achieving higher DSC in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD of SyMRI exhibited better performance than DWI, attaining the highest DSC and accuracy. The correlation coefficients (R²) for nnU-Net ranged from 0.99 to 1.00 for both DWI and PD, significantly surpassing the performance of U-Net. The nnU-Net exhibited exceptional segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
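The Dice Similarity Coefficient reported in this and several other abstracts on this page measures volumetric overlap between predicted and reference masks. A minimal sketch on flattened binary masks (illustrative only, not the study's evaluation pipeline):

```python
def dice(mask_a, mask_b):
    """Dice Similarity Coefficient between two binary masks,
    given as flattened lists of 0/1: 2|A∩B| / (|A| + |B|)."""
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # two empty masks agree

# Toy prediction vs. ground truth: 3 overlapping foreground voxels
pred = [0, 1, 1, 1, 0, 0, 1, 0]
gt   = [0, 1, 1, 0, 0, 1, 1, 0]
score = dice(pred, gt)  # 2*3 / (4 + 4) = 0.75
```

A DSC of 0.93 or above, as reported here, indicates near-complete overlap with the manual reference.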

Deep learning-based cardiac computed tomography angiography left atrial segmentation and quantification in atrial fibrillation patients: a multi-model comparative study.

Feng L, Lu W, Liu J, Chen Z, Jin J, Qian N, Pan J, Wang L, Xiang J, Jiang J, Wang Y

PubMed · Sep 26, 2025
Quantitative assessment of left atrial volume (LAV) is an important factor in the study of the pathogenesis of atrial fibrillation. However, automated left atrial segmentation with quantitative assessment usually faces many challenges. The main objective of this study was to find the optimal left atrial segmentation model based on cardiac computed tomography angiography (CTA) and to perform quantitative LAV measurement. A multi-center left atrial study cohort containing 182 cardiac CTAs with atrial fibrillation was created, each case accompanied by expert image annotation by a cardiologist. Then, based on this left atrium dataset, five recent state-of-the-art (SOTA) medical image segmentation models (DAResUNet, nnFormer, xLSTM-UNet, UNETR, and VNet) were used to train and validate the left atrium segmentation model. Further, the optimal segmentation model was used to assess the consistency validation of the LAV. DAResUNet achieved the best performance in DSC (0.924 ± 0.023) and JI (0.859 ± 0.065) among all models, while VNet was the best performer in HD (12.457 ± 6.831) and ASD (1.034 ± 0.178). The Bland-Altman plot demonstrated strong agreement (mean bias −5.69 mL, 95% LoA −19 to 7.6 mL) between the model's automatic prediction and manual measurements. Deep learning models based on a study cohort of 182 CTA left atrial images were capable of achieving competitive results in left atrium segmentation. LAV assessment based on deep learning models may be useful for biomarkers of the onset of atrial fibrillation.
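The Bland-Altman statistics quoted above (mean bias and 95% limits of agreement) come directly from the paired differences between automatic and manual measurements. A minimal sketch with hypothetical LAV values (not the study's data):

```python
import statistics

def bland_altman(auto, manual):
    """Mean bias and 95% limits of agreement (bias ± 1.96 · SD of
    the paired differences), as plotted in a Bland-Altman analysis."""
    diffs = [a - m for a, m in zip(auto, manual)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired LAV measurements in mL (illustrative values only)
auto   = [98.0, 120.5, 87.2, 140.1, 110.3]
manual = [101.0, 126.0, 90.0, 148.5, 115.0]
bias, lo, hi = bland_altman(auto, manual)
```

A negative bias, as in the study, means the automatic method systematically measures slightly smaller volumes than the manual reference.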

Exploring learning transferability in deep segmentation of colorectal cancer liver metastases.

Abbas M, Badic B, Andrade-Miranda G, Bourbonne V, Jaouen V, Visvikis D, Conze PH

PubMed · Sep 26, 2025
Ensuring the seamless transfer of knowledge and models across various datasets and clinical contexts is of paramount importance in medical image segmentation. This is especially true for liver lesion segmentation, which plays a key role in pre-operative planning and treatment follow-up. Despite the progress of deep learning algorithms using Transformers, automatically segmenting small hepatic metastases remains a persistent challenge. This can be attributed to the degradation of small structures due to the intrinsic process of feature down-sampling inherent to many deep architectures, coupled with the imbalance between foreground metastases voxels and background. While similar challenges have been observed for liver tumors originating from hepatocellular carcinoma, their manifestation in the context of liver metastasis delineation remains under-explored and requires well-defined guidelines. Through comprehensive experiments, this paper aims to bridge this gap and to demonstrate the impact of various transfer learning schemes from off-the-shelf datasets to a dataset containing liver metastases only. Our scale-specific evaluation reveals that models trained from scratch or with domain-specific pre-training demonstrate greater proficiency.

Theranostics in nuclear medicine: the era of precision oncology.

Gandhi N, Alaseem AM, Deshmukh R, Patel A, Alsaidan OA, Fareed M, Alasiri G, Patel S, Prajapati B

PubMed · Sep 26, 2025
Theranostics represents a transformative advancement in nuclear medicine by integrating molecular imaging and targeted radionuclide therapy within the paradigm of personalized oncology. This review elucidates the historical evolution and contemporary clinical applications of theranostics, emphasizing its pivotal role in precision cancer management. The theranostic approach involves the coupling of diagnostic and therapeutic radionuclides that target identical molecular biomarkers, enabling simultaneous visualization and treatment of malignancies such as neuroendocrine tumors (NETs), prostate cancer, and differentiated thyroid carcinoma. Key theranostic radiopharmaceutical pairs, including Gallium-68-labeled DOTA-Tyr3-octreotate (Ga-68-DOTATATE) with Lutetium-177-labeled DOTA-Tyr3-octreotate (Lu-177-DOTATATE), and Gallium-68-labeled Prostate-Specific Membrane Antigen (Ga-68-PSMA) with Lutetium-177-labeled Prostate-Specific Membrane Antigen (Lu-177-PSMA), exemplify the "see-and-treat" principle central to this modality. This article further explores critical molecular targets such as somatostatin receptor subtype 2, prostate-specific membrane antigen, human epidermal growth factor receptor 2, CD20, and C-X-C chemokine receptor type 4, along with design principles for radiopharmaceuticals that optimize target specificity while minimizing off-target toxicity. Advances in imaging platforms, including positron emission tomography/computed tomography (PET/CT), single-photon emission computed tomography/CT (SPECT/CT), and hybrid positron emission tomography/magnetic resonance imaging (PET/MRI), have been instrumental in accurate dosimetry, therapeutic response assessment, and adaptive treatment planning. Integration of artificial intelligence (AI) and radiomics holds promise for enhanced image segmentation, predictive modeling, and individualized dosimetric planning. 
The review also addresses regulatory, manufacturing, and economic considerations, including guidelines from the United States Food and Drug Administration (USFDA) and European Medicines Agency (EMA), Good Manufacturing Practice (GMP) standards, and reimbursement frameworks, which collectively influence global adoption of theranostics. In summary, theranostics is poised to become a cornerstone of next-generation oncology, catalyzing a paradigm shift toward biologically driven, real-time personalized cancer care that seamlessly links diagnosis and therapy.

Segmental airway volume as a predictive indicator of postoperative extubation timing in patients with oral and maxillofacial space infections: a retrospective analysis.

Liu S, Shen H, Zhu B, Zhang X, Zhang X, Li W

PubMed · Sep 26, 2025
The objective of this study was to investigate the significance of segmental airway volume in developing a predictive model to guide the timing of postoperative extubation in patients with oral and maxillofacial space infections (OMSIs). A retrospective cohort study was performed to analyse clinical data from 177 medical records, with a focus on key variables related to disease severity and treatment outcomes. The inclusion criteria of this study were as follows: adherence to the OMSI diagnostic criteria (local tissue inflammation characterized by erythema, oedema, hyperthermia and tenderness); compromised functions such as difficulties opening the mouth, swallowing, or breathing; the presence of purulent material confirmed by puncture or computed tomography (CT); and laboratory examinations indicating an underlying infection process. The data included age, sex, body mass index (BMI), blood test results, smoking history, history of alcohol abuse, the extent of mouth opening, the number of infected spaces, and the source of infection. DICOM files were imported into 3D Slicer for manual segmentation, followed by volume measurement of each segment. We observed statistically significant differences in age, neutrophil count, lymphocyte count, and C4 segment volume among patient subgroups stratified by extubation time. Regression analysis revealed that age and C4 segment volume were significantly correlated with extubation time. Additionally, the machine learning models yielded good evaluation metrics. Segmental airway volume shows promise as an indicator for predicting extubation time. Predictive models constructed using machine learning algorithms yield good predictive performance and may facilitate clinical decision-making.
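The segmental airway volumes measured above reduce, after manual segmentation in 3D Slicer, to voxel counting: the region's voxel count times the physical volume of one voxel. A minimal sketch with hypothetical numbers (the study does not detail its measurement code):

```python
def segment_volume_ml(voxel_count, spacing_mm):
    """Volume of a segmented airway region: voxel count times the
    physical volume of one voxel, converted from mm^3 to mL."""
    sx, sy, sz = spacing_mm
    return voxel_count * sx * sy * sz / 1000.0  # 1 mL = 1000 mm^3

# Hypothetical: 25,000 labelled voxels at 0.5 x 0.5 x 1.0 mm spacing
vol = segment_volume_ml(25000, (0.5, 0.5, 1.0))  # 6.25 mL
```

Voxel spacing comes from the DICOM header, which is why accurate spacing metadata matters as much as the segmentation itself.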

RAU: Reference-based Anatomical Understanding with Vision Language Models

Yiwei Li, Yikang Liu, Jiaqi Guo, Lin Zhao, Zheyuan Zhang, Xiao Chen, Boris Mailhe, Ankush Mukherjee, Terrence Chen, Shanhui Sun

arXiv preprint · Sep 26, 2025
Anatomical understanding through deep learning is critical for automatic report generation, intra-operative navigation, and organ localization in medical imaging; however, its progress is constrained by the scarcity of expert-labeled data. A promising remedy is to leverage an annotated reference image to guide the interpretation of an unlabeled target. Although recent vision-language models (VLMs) exhibit non-trivial visual reasoning, their reference-based understanding and fine-grained localization remain limited. We introduce RAU, a framework for reference-based anatomical understanding with VLMs. We first show that a VLM learns to identify anatomical regions through relative spatial reasoning between reference and target images, trained on a moderately sized dataset. We validate this capability through visual question answering (VQA) and bounding box prediction. Next, we demonstrate that the VLM-derived spatial cues can be seamlessly integrated with the fine-grained segmentation capability of SAM2, enabling localization and pixel-level segmentation of small anatomical regions, such as vessel segments. Across two in-distribution and two out-of-distribution datasets, RAU consistently outperforms a SAM2 fine-tuning baseline using the same memory setup, yielding more accurate segmentations and more reliable localization. More importantly, its strong generalization ability makes it scalable to out-of-distribution datasets, a property crucial for medical image applications. To the best of our knowledge, RAU is the first to explore the capability of VLMs for reference-based identification, localization, and segmentation of anatomical structures in medical images. Its promising performance highlights the potential of VLM-driven approaches for anatomical understanding in automated clinical workflows.

Bézier Meets Diffusion: Robust Generation Across Domains for Medical Image Segmentation

Chen Li, Meilong Xu, Xiaoling Hu, Weimin Lyu, Chao Chen

arXiv preprint · Sep 26, 2025
Training robust learning algorithms across different medical imaging modalities is challenging due to the large domain gap. Unsupervised domain adaptation (UDA) mitigates this problem by using annotated images from the source domain and unlabeled images from the target domain to train the deep models. Existing approaches often rely on GAN-based style transfer, but these methods struggle to capture cross-domain mappings in regions with high variability. In this paper, we propose a unified framework, Bézier Meets Diffusion, for cross-domain image generation. First, we introduce a Bézier-curve-based style transfer strategy that effectively reduces the domain gap between source and target domains. The transferred source images enable the training of a more robust segmentation model across domains. Thereafter, using pseudo-labels generated by this segmentation model on the target domain, we train a conditional diffusion model (CDM) to synthesize high-quality, labeled target-domain images. To mitigate the impact of noisy pseudo-labels, we further develop an uncertainty-guided score matching method that improves the robustness of CDM training. Extensive experiments on public datasets demonstrate that our approach generates realistic labeled images, significantly augmenting the target domain and improving segmentation performance.
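One common way a Bézier curve is used for intensity-level style transfer is as a smooth monotone mapping from input to output intensities, built from a cubic curve through (0,0) and (1,1). The following lookup-table sketch is a hypothetical illustration of that idea, not the paper's actual method:

```python
def bezier_lut(p1, p2, n=256):
    """Intensity lookup table from a cubic Bézier curve through (0,0)
    and (1,1) with control points p1, p2: sample the curve densely,
    then for each input level take the curve's y at the nearest x."""
    (x1, y1), (x2, y2) = p1, p2
    samples = []
    for i in range(n * 4):
        t = i / (n * 4 - 1)
        # Bernstein form of the cubic Bézier with endpoints (0,0), (1,1)
        x = 3*(1-t)**2*t*x1 + 3*(1-t)*t**2*x2 + t**3
        y = 3*(1-t)**2*t*y1 + 3*(1-t)*t**2*y2 + t**3
        samples.append((x, y))
    lut = []
    for level in range(n):
        xq = level / (n - 1)
        _, y = min(samples, key=lambda s: abs(s[0] - xq))
        lut.append(y)
    return lut

# Hypothetical S-shaped contrast curve; randomising the control points
# would yield a family of plausible cross-domain intensity styles
lut = bezier_lut((0.3, 0.0), (0.7, 1.0))
```

Applying such a table to normalised source-image intensities perturbs contrast smoothly while preserving anatomical structure, which is the usual appeal of curve-based over GAN-based style transfer.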