Effect of low-dose colchicine on pericoronary inflammation and coronary plaque composition in chronic coronary disease: a subanalysis of the LoDoCo2 trial.

Fiolet ATL, Lin A, Kwiecinski J, Tutein Nolthenius J, McElhinney P, Grodecki K, Kietselaer B, Opstal TS, Cornel JH, Knol RJ, Schaap J, Aarts RAHM, Tutein Nolthenius AMFA, Nidorf SM, Velthuis BK, Dey D, Mosterd A

PubMed · May 19, 2025
Low-dose colchicine (0.5 mg once daily) reduces the risk of major cardiovascular events in coronary disease, but its mechanism of action is not yet fully understood. We investigated whether low-dose colchicine is associated with changes in pericoronary inflammation and plaque composition in patients with chronic coronary disease. We performed a cross-sectional, nationwide subanalysis of the Low-Dose Colchicine 2 Trial (LoDoCo2, n=5522). Coronary CT angiography studies were performed in 151 participants randomised to colchicine or placebo after a median treatment duration of 28.2 months. Pericoronary adipose tissue (PCAT) attenuation measurements around proximal coronary artery segments and quantitative plaque analysis for the entire coronary tree were performed using artificial intelligence-enabled plaque analysis software. Median PCAT attenuation was not significantly different between the two groups (-79.5 Hounsfield units (HU) for colchicine versus -78.7 HU for placebo, p=0.236). Participants assigned to colchicine had a higher volume (169.6 mm<sup>3</sup> vs 113.1 mm<sup>3</sup>, p=0.041) and burden (9.6% vs 7.0%, p=0.035) of calcified plaque, and a higher volume of dense calcified plaque (192.8 mm<sup>3</sup> vs 144.3 mm<sup>3</sup>, p=0.048) compared with placebo, independent of statin therapy. Colchicine treatment was associated with a lower burden of low-attenuation plaque in participants on a low-intensity statin, but not in those on a high-intensity statin (p<sub>interaction</sub>=0.037). Pericoronary inflammation did not differ between participants who received low-dose colchicine and those who received placebo. Low-dose colchicine was associated with a higher volume of calcified plaque, particularly dense calcified plaque, which is considered a feature of plaque stability.
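
The abstract does not spell out how PCAT attenuation is computed; a widely used convention (the fat attenuation index) is the mean attenuation of adipose-range voxels (-190 to -30 HU) within roughly one vessel diameter of the coronary wall. The NumPy sketch below assumes that convention and is illustrative only, not the trial's proprietary analysis software.

```python
import numpy as np

def pcat_attenuation(hu_volume, dist_to_vessel_mm, vessel_diameter_mm,
                     fat_range=(-190.0, -30.0)):
    """Mean attenuation of adipose-range voxels within one vessel diameter of
    the coronary wall -- a common PCAT convention, not necessarily the exact
    definition used in this LoDoCo2 subanalysis."""
    pericoronary = dist_to_vessel_mm <= vessel_diameter_mm
    adipose = (hu_volume >= fat_range[0]) & (hu_volume <= fat_range[1])
    voxels = hu_volume[pericoronary & adipose]
    return float(voxels.mean()) if voxels.size else float("nan")

# Toy usage with a random HU volume and distance map (illustration only).
rng = np.random.default_rng(0)
hu = rng.normal(-80, 30, size=(64, 64, 64))
dist = rng.uniform(0, 10, size=(64, 64, 64))
print(round(pcat_attenuation(hu, dist, vessel_diameter_mm=3.5), 1))
```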

Segmentation of temporomandibular joint structures on MRI images using neural networks for diagnosis of pathologies

Maksim I. Ivanov, Olga E. Mendybaeva, Yuri E. Karyakin, Igor N. Glukhikh, Aleksey V. Lebedev

arXiv preprint · May 19, 2025
This article explores the use of artificial intelligence for diagnosing pathologies of the temporomandibular joint (TMJ), in particular for segmenting the articular disc on MRI images. The work is motivated by the high prevalence of TMJ pathologies and the need to improve the accuracy and speed of diagnosis in medical institutions. Existing solutions (Diagnocat, MandSeg) were analyzed and found unsuitable for studying the articular disc because they are oriented towards bone structures. To address this, an original dataset of 94 images was collected with the classes "temporomandibular joint" and "jaw", and augmentation methods were used to increase the amount of data. U-Net, YOLOv8n, YOLOv11n and Roboflow neural network models were then trained and compared. Evaluation was carried out using the Dice Score, Precision, Sensitivity, Specificity, and Mean Average Precision metrics. The results confirm the potential of the Roboflow model for segmentation of the temporomandibular joint. Future work will develop an algorithm for measuring the distance between the jaws and determining the position of the articular disc, which should improve the diagnosis of TMJ pathologies.
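
For readers unfamiliar with the reported metrics, here is a generic NumPy sketch of Dice, precision, sensitivity and specificity for binary masks; it is illustrative only and not the authors' evaluation code, and the function and variable names are ours.

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Dice, precision, sensitivity, specificity for binary masks.
    A generic sketch of the metrics named in the abstract."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.sum(pred & gt)
    fp = np.sum(pred & ~gt)
    fn = np.sum(~pred & gt)
    tn = np.sum(~pred & ~gt)
    eps = 1e-8  # avoid division by zero on empty masks
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "precision": tp / (tp + fp + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
    }

pred = np.zeros((128, 128), dtype=np.uint8); pred[30:80, 30:80] = 1
gt = np.zeros((128, 128), dtype=np.uint8); gt[35:85, 35:85] = 1
print({k: round(v, 3) for k, v in segmentation_metrics(pred, gt).items()})
```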

Current trends and emerging themes in utilizing artificial intelligence to enhance anatomical diagnostic accuracy and efficiency in radiotherapy.

Pezzino S, Luca T, Castorina M, Puleo S, Castorina S

PubMed · May 19, 2025
Artificial intelligence (AI) incorporation into healthcare has proven revolutionary, especially in radiotherapy, where accuracy is critical. The purpose of this study is to present patterns and developing topics in the application of AI to improve the precision of anatomical diagnosis, organ delineation, and therapeutic effectiveness in radiotherapy and radiological imaging. We performed a bibliometric analysis of scholarly articles in these fields published since 2014, examining research output from key contributing nations and institutions, analyzing notable research subjects, and investigating trends in scientific terminology pertaining to AI in radiology and radiotherapy. Furthermore, we examined AI-based software solutions in these domains, with a specific emphasis on extracting anatomical features and recognizing organs for treatment planning. Our investigation found a significant surge in papers on AI in these fields since 2014. Institutions such as Emory University and Memorial Sloan-Kettering Cancer Center contributed substantially to establishing the United States and China as the leading research-producing nations. Key study areas encompassed adaptive radiotherapy informed by anatomical alterations, MR-Linac for enhanced visualization of soft tissues, and multi-organ segmentation for accurate radiotherapy planning. An evident increase in the frequency of phrases such as 'radiomics,' 'radiotherapy segmentation,' and 'dosiomics' was noted. The evaluation of AI-based software revealed a wide range of uses across subdisciplines of radiotherapy and radiology, particularly in improving the identification of anatomical features for treatment planning and identifying organs at risk. The incorporation of AI in anatomical diagnosis in radiological imaging and radiotherapy is progressing rapidly, with substantial capacity to transform the precision of diagnoses and the effectiveness of treatment planning.

SMURF: Scalable method for unsupervised reconstruction of flow in 4D flow MRI

Atharva Hans, Abhishek Singh, Pavlos Vlachos, Ilias Bilionis

arXiv preprint · May 18, 2025
We introduce SMURF, a scalable and unsupervised machine learning method for simultaneously segmenting vascular geometries and reconstructing velocity fields from 4D flow MRI data. SMURF models geometry and velocity fields using multilayer perceptron-based functions incorporating Fourier feature embeddings and random weight factorization to accelerate convergence. A measurement model connects these fields to the observed image magnitude and phase data. Maximum likelihood estimation and subsampling enable SMURF to process high-dimensional datasets efficiently. Evaluations on synthetic, in vitro, and in vivo datasets demonstrate SMURF's performance. On synthetic internal carotid artery aneurysm data derived from CFD, SMURF achieves quarter-voxel segmentation accuracy at noise levels of up to 50%, up to twice as accurate as the state-of-the-art segmentation method. In an in vitro experiment on Poiseuille flow, SMURF reduces velocity reconstruction RMSE by approximately 34% compared to raw measurements. In in vivo internal carotid artery aneurysm data, SMURF attains nearly half-voxel segmentation accuracy relative to expert annotations and decreases median velocity divergence residuals by about 31%, with a 27% reduction in the interquartile range. These results indicate that SMURF is robust to noise, preserves flow structure, and identifies patient-specific morphological features. SMURF advances 4D flow MRI accuracy, potentially enhancing the diagnostic utility of 4D flow MRI in clinical applications.
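
The abstract names Fourier feature embeddings as the input mapping for the coordinate MLPs. Below is a minimal sketch of random Fourier features in the style of Tancik et al.; the embedding width and frequency scale are illustrative assumptions, not SMURF's actual hyperparameters.

```python
import numpy as np

class FourierFeatures:
    """Random Fourier feature embedding gamma(x) = [cos(2*pi*Bx), sin(2*pi*Bx)]
    with B drawn from a Gaussian -- the kind of input mapping the abstract
    mentions; dimensions and scale here are placeholders."""
    def __init__(self, in_dim=3, num_features=128, scale=10.0, seed=0):
        rng = np.random.default_rng(seed)
        self.B = rng.normal(0.0, scale, size=(in_dim, num_features))

    def __call__(self, x):                      # x: (N, in_dim) coordinates
        proj = 2.0 * np.pi * x @ self.B         # (N, num_features)
        return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

coords = np.random.rand(4, 3)                   # toy spatial coordinates
emb = FourierFeatures()(coords)
print(emb.shape)                                # (4, 256), fed to the MLP
```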

Attention-Enhanced U-Net for Accurate Segmentation of COVID-19 Infected Lung Regions in CT Scans

Amal Lahchim, Lazar Davic

arXiv preprint · May 18, 2025
In this study, we propose a robust methodology for automatic segmentation of infected lung regions in COVID-19 CT scans using convolutional neural networks. The approach is based on a modified U-Net architecture enhanced with attention mechanisms, data augmentation, and postprocessing techniques. It achieved a Dice coefficient of 0.8658 and mean IoU of 0.8316, outperforming other methods. The dataset was sourced from public repositories and augmented for diversity. Results demonstrate superior segmentation performance. Future work includes expanding the dataset, exploring 3D segmentation, and preparing the model for clinical deployment.
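
The attention mechanism is not detailed in the abstract; the PyTorch sketch below shows a generic additive attention gate of the kind used in Attention U-Net (Oktay et al.), which may differ from the authors' design.

```python
import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Additive attention gate in the style of Attention U-Net: a decoder
    gating signal reweights encoder skip features. Generic sketch only."""
    def __init__(self, skip_ch, gate_ch, inter_ch):
        super().__init__()
        self.w_x = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)
        self.w_g = nn.Conv2d(gate_ch, inter_ch, kernel_size=1)
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)

    def forward(self, x, g):
        # x: encoder skip features, g: decoder gating signal (same H, W here)
        att = torch.sigmoid(self.psi(torch.relu(self.w_x(x) + self.w_g(g))))
        return x * att                           # attenuated skip connection

x = torch.randn(1, 64, 32, 32)                   # skip features
g = torch.randn(1, 128, 32, 32)                  # gating features
print(AttentionGate(64, 128, 32)(x, g).shape)    # torch.Size([1, 64, 32, 32])
```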

Mutual Evidential Deep Learning for Medical Image Segmentation

Yuanpeng He, Yali Bi, Lijian Li, Chi-Man Pun, Wenpin Jiao, Zhi Jin

arXiv preprint · May 18, 2025
Existing semi-supervised medical segmentation co-learning frameworks have realized that model performance can be diminished by the biases in model recognition caused by low-quality pseudo-labels. Due to the averaging nature of their pseudo-label integration strategy, they fail to explore the reliability of pseudo-labels from different sources. In this paper, we propose a mutual evidential deep learning (MEDL) framework that offers a potentially viable solution for pseudo-label generation in semi-supervised learning from two perspectives. First, we introduce networks with different architectures to generate complementary evidence for unlabeled samples and adopt an improved class-aware evidential fusion to guide the confident synthesis of evidential predictions sourced from diverse architectural networks. Second, utilizing the uncertainty in the fused evidence, we design an asymptotic Fisher information-based evidential learning strategy. This strategy enables the model to initially focus on unlabeled samples with more reliable pseudo-labels, gradually shifting attention to samples with lower-quality pseudo-labels while avoiding over-penalization of mislabeled classes in high data uncertainty samples. Additionally, for labeled data, we continue to adopt an uncertainty-driven asymptotic learning strategy, gradually guiding the model to focus on challenging voxels. Extensive experiments on five mainstream datasets have demonstrated that MEDL achieves state-of-the-art performance.
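
To make the evidential terminology concrete, the PyTorch sketch below maps voxelwise logits to Dirichlet evidence, expected class probabilities, and an uncertainty score in the style of Sensoy et al.; it illustrates the general evidential formulation only, not MEDL's class-aware fusion or Fisher-information-based weighting.

```python
import torch
import torch.nn.functional as F

def evidential_output(logits):
    """Per-voxel Dirichlet evidence, expected probabilities and uncertainty
    (subjective-logic style). Generic sketch, not the MEDL fusion rule."""
    evidence = F.softplus(logits)              # non-negative evidence e_k
    alpha = evidence + 1.0                     # Dirichlet parameters
    strength = alpha.sum(dim=1, keepdim=True)  # S = sum_k alpha_k
    prob = alpha / strength                    # expected class probabilities
    uncertainty = logits.shape[1] / strength   # u = K / S, in (0, 1]
    return prob, uncertainty

logits = torch.randn(2, 4, 8, 8, 8)            # (batch, classes, D, H, W)
prob, unc = evidential_output(logits)
print(prob.shape, unc.shape)                   # class probs, voxelwise uncertainty
```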

A Robust Automated Segmentation Method for White Matter Hyperintensity of Vascular-origin.

He H, Jiang J, Peng S, He C, Sun T, Fan F, Song H, Sun D, Xu Z, Wu S, Lu D, Zhang J

PubMed · May 17, 2025
White matter hyperintensity (WMH) is a primary manifestation of small vessel disease (SVD), leading to vascular cognitive impairment and other disorders. Accurate WMH quantification is vital for diagnosis and prognosis, but current automatic segmentation methods often fall short, especially across different datasets. The aim of this study is to develop and validate a robust deep learning segmentation method for WMH of vascular origin. We developed a transformer-based method for the automatic segmentation of vascular-origin WMH using both 3D T1 and 3D T2-FLAIR images. Our initial dataset comprised 126 participants with varying WMH burdens due to SVD, each with manually segmented WMH masks used for training and testing. External validation was performed on two independent datasets: the WMH Segmentation Challenge 2017 dataset (170 subjects) and an in-house vascular risk factor dataset (70 subjects), which included scans acquired on eight different MRI systems at field strengths of 1.5T, 3T, and 5T. This approach enabled a comprehensive assessment of the method's generalizability across diverse imaging conditions. We further compared our method against LGA, LPA, BIANCA, UBO-detector and TrUE-Net in optimized settings. Our method consistently outperformed the others, achieving a median Dice coefficient of 0.78±0.09 on our primary dataset, 0.72±0.15 on external dataset 1, and 0.72±0.14 on external dataset 2. The relative volume errors were 0.15±0.14, 0.50±0.86, and 0.47±1.02, respectively; the true positive rates were 0.81±0.13, 0.92±0.09, and 0.92±0.12, and the false positive rates were 0.20±0.09, 0.40±0.18, and 0.40±0.19. None of the external validation datasets were used for model training; they comprise previously unseen MRI scans acquired with different scanners and protocols. This setup closely reflects real-world clinical scenarios and further demonstrates the robustness and generalizability of our model across diverse MRI systems and acquisition settings. As such, the proposed method provides a reliable solution for WMH segmentation in large-scale cohort studies.
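
As one concrete reading of the reported relative volume error, a simple voxel-counting sketch follows; the paper may define the sign convention or the lesion-level TPR/FPR differently, and the voxel volume used here is a placeholder.

```python
import numpy as np

def relative_volume_error(pred_mask, gt_mask, voxel_volume_mm3=1.0):
    """Relative volume error |V_pred - V_gt| / V_gt for a WMH mask.
    Generic voxel-counting sketch; not the authors' exact definition."""
    v_pred = pred_mask.astype(bool).sum() * voxel_volume_mm3
    v_gt = gt_mask.astype(bool).sum() * voxel_volume_mm3
    return abs(v_pred - v_gt) / max(v_gt, 1e-8)

pred = np.zeros((64, 64, 32), dtype=np.uint8); pred[20:40, 20:40, 10:20] = 1
gt = np.zeros((64, 64, 32), dtype=np.uint8); gt[22:40, 20:42, 10:20] = 1
print(round(relative_volume_error(pred, gt, voxel_volume_mm3=0.8), 3))
```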

Foundation versus Domain-Specific Models for Left Ventricular Segmentation on Cardiac Ultrasound

Chao, C.-J., Gu, Y., Kumar, W., Xiang, T., Appari, L., Wu, J., Farina, J. M., Wraith, R., Jeong, J., Arsanjani, R., Garvan, K. C., Oh, J. K., Langlotz, C. P., Banerjee, I., Li, F.-F., Adeli, E.

medRxiv preprint · May 17, 2025
The Segment Anything Model (SAM) was fine-tuned on the EchoNet-Dynamic dataset and evaluated on external transthoracic echocardiography (TTE) and Point-of-Care Ultrasound (POCUS) datasets from CAMUS (University Hospital of St Etienne) and Mayo Clinic (99 patients: 58 TTE, 41 POCUS). Fine-tuned SAM was superior or comparable to MedSAM. The fine-tuned SAM also outperformed EchoNet and U-Net models, demonstrating strong generalization, especially on apical 2-chamber (A2C) images (fine-tuned SAM vs. EchoNet: CAMUS-A2C: DSC 0.891 ± 0.040 vs. 0.752 ± 0.196, p<0.0001) and POCUS (DSC 0.857 ± 0.047 vs. 0.667 ± 0.279, p<0.0001). Additionally, the SAM-enhanced workflow reduced annotation time by 50% (11.6 ± 4.5 sec vs. 5.7 ± 1.7 sec, p<0.0001) while maintaining segmentation quality. We demonstrated an effective strategy for fine-tuning a vision foundation model for enhancing clinical workflow efficiency and supporting human-AI collaboration.
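
A common recipe for adapting SAM to a new modality is to freeze the heavy image and prompt encoders and fine-tune only the lightweight mask decoder. The sketch below illustrates that general strategy with the public segment-anything package; the checkpoint path is a placeholder, and the authors' exact training configuration may differ.

```python
import torch
from segment_anything import sam_model_registry  # facebookresearch/segment-anything

# Load a ViT-B SAM checkpoint (placeholder path) and freeze the encoders so
# only the mask decoder is updated on echocardiography frames.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")

for p in sam.image_encoder.parameters():
    p.requires_grad = False                     # keep pretrained image features
for p in sam.prompt_encoder.parameters():
    p.requires_grad = False

trainable = list(sam.mask_decoder.parameters())
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
print(sum(p.numel() for p in trainable), "trainable parameters")
```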

Fully Automated Evaluation of Condylar Remodeling after Orthognathic Surgery in Skeletal Class II Patients Using Deep Learning and Landmarks.

Jia W, Wu H, Mei L, Wu J, Wang M, Cui Z

PubMed · May 17, 2025
Condylar remodeling is a key prognostic indicator in maxillofacial surgery for skeletal class II patients. This study aimed to develop and validate a fully automated method leveraging landmark-guided segmentation and registration for efficient assessment of condylar remodeling. A V-Net-based deep learning workflow was developed to automatically segment the mandible and localize anatomical landmarks from CT images. Cutting planes were computed based on the landmarks to segment the condylar and ramus volumes from the mandible mask. The stable ramus served as a reference for registering pre- and post-operative condyles using the Iterative Closest Point (ICP) algorithm. Condylar remodeling was subsequently assessed through mesh registration, heatmap visualization, and quantitative metrics of surface distance and volumetric change. Experts also rated the concordance between automated assessments and clinical diagnoses. In the test set, condylar segmentation achieved a Dice coefficient of 0.98, and landmark prediction yielded a mean absolute error of 0.26 mm. The automated evaluation process was completed in 5.22 seconds, approximately 150 times faster than manual assessments. The method accurately quantified condylar volume changes, ranging from 2.74% to 50.67% across patients. Expert ratings for all test cases averaged 9.62. This study introduced a consistent, accurate, and fully automated approach for condylar remodeling evaluation. The well-defined anatomical landmarks guided precise segmentation and registration, while deep learning supported an end-to-end automated workflow. The test results demonstrated its broad clinical applicability across various degrees of condylar remodeling and high concordance with expert assessments. By integrating anatomical landmarks and deep learning, the proposed method improves efficiency by 150 times without compromising accuracy, thereby facilitating an efficient and accurate assessment of orthognathic prognosis. The personalized 3D condylar remodeling models aid in visualizing sequelae, such as joint pain or skeletal relapse, and guide individualized management of TMJ disorders.
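
The registration step described in the abstract can be approximated with an off-the-shelf ICP implementation; the Open3D sketch below registers the post-operative ramus to the pre-operative ramus and carries the resulting rigid transform over to the condyle. The correspondence threshold and toy point sets are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
import open3d as o3d

def register_condyle(pre_ramus_pts, post_ramus_pts, post_condyle_pts,
                     max_corr_dist_mm=2.0):
    """Rigidly align post-op to pre-op anatomy on the stable ramus with ICP,
    then apply the same transform to the post-op condyle (hedged sketch)."""
    src, tgt = o3d.geometry.PointCloud(), o3d.geometry.PointCloud()
    src.points = o3d.utility.Vector3dVector(post_ramus_pts)
    tgt.points = o3d.utility.Vector3dVector(pre_ramus_pts)
    result = o3d.pipelines.registration.registration_icp(
        src, tgt, max_corr_dist_mm, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    condyle = o3d.geometry.PointCloud()
    condyle.points = o3d.utility.Vector3dVector(post_condyle_pts)
    condyle.transform(result.transformation)    # ramus-derived rigid transform
    return condyle

rng = np.random.default_rng(0)
pre = rng.normal(size=(500, 3)); post = pre + 0.5      # toy shifted ramus
moved = register_condyle(pre, post, rng.normal(size=(300, 3)) + 0.5)
print(np.asarray(moved.points).shape)                  # (300, 3)
```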

MedVKAN: Efficient Feature Extraction with Mamba and KAN for Medical Image Segmentation

Hancan Zhu, Jinhao Chen, Guanghua He

arXiv preprint · May 17, 2025
Medical image segmentation relies heavily on convolutional neural networks (CNNs) and Transformer-based models. However, CNNs are constrained by limited receptive fields, while Transformers suffer from scalability challenges due to their quadratic computational complexity. To address these limitations, recent advances have explored alternative architectures. The state-space model Mamba offers near-linear complexity while capturing long-range dependencies, and the Kolmogorov-Arnold Network (KAN) enhances nonlinear expressiveness by replacing fixed activation functions with learnable ones. Building on these strengths, we propose MedVKAN, an efficient feature extraction model integrating Mamba and KAN. Specifically, we introduce the EFC-KAN module, which enhances KAN with convolutional operations to improve local pixel interaction. We further design the VKAN module, integrating Mamba with EFC-KAN as a replacement for Transformer modules, significantly improving feature extraction. Extensive experiments on five public medical image segmentation datasets show that MedVKAN achieves state-of-the-art performance on four datasets and ranks second on the remaining one. These results validate the potential of Mamba and KAN for medical image segmentation while introducing an innovative and computationally efficient feature extraction framework. The code is available at: https://github.com/beginner-cjh/MedVKAN.
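
The core idea the abstract attributes to KAN, replacing fixed activation functions with learnable ones, can be illustrated with a toy per-channel learnable activation built from a trainable mixture of simple basis functions. This sketch is not the authors' EFC-KAN or VKAN module, which use spline parameterizations, convolutions, and Mamba blocks not shown here.

```python
import torch
import torch.nn as nn

class LearnableActivation(nn.Module):
    """Toy per-channel learnable activation: a trainable mixture of basis
    functions, illustrating the 'learnable instead of fixed activation' idea."""
    def __init__(self, channels):
        super().__init__()
        init = torch.zeros(3, channels)
        init[0] = 1.0                                    # start as the identity map
        self.w = nn.Parameter(init)                      # mixing coefficients

    def forward(self, x):                                # x: (N, C, H, W)
        w = self.w.view(3, 1, -1, 1, 1)
        basis = torch.stack([x, torch.sigmoid(x) * x, torch.tanh(x)])
        return (w * basis).sum(dim=0)

x = torch.randn(2, 16, 8, 8)
print(LearnableActivation(16)(x).shape)                  # torch.Size([2, 16, 8, 8])
```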