Multi-Center 3D CNN for Parkinson's disease diagnosis and prognosis using clinical and T1-weighted MRI data.

Basaia S, Sarasso E, Sciancalepore F, Balestrino R, Musicco S, Pisano S, Stankovic I, Tomic A, Micco R, Tessitore A, Salvi M, Meiburger KM, Kostic VS, Molinari F, Agosta F, Filippi M

PubMed · Aug 5, 2025
Parkinson's disease (PD) presents challenges in early diagnosis and progression prediction. Recent advancements in machine learning, particularly convolutional neural networks (CNNs), show promise in enhancing diagnostic accuracy and prognostic capabilities using neuroimaging data. The aims of this study were: (i) to develop a 3D CNN based on MRI to distinguish PD patients from controls, and (ii) to employ the CNN to predict the progression of PD. Three cohorts were selected: 86 mild and 62 moderate-to-severe PD patients plus 60 controls; 14 mild PD patients and 14 controls from the Parkinson's Progression Markers Initiative database; and 38 de novo mild PD patients and 38 controls. All participants underwent MRI scans and clinical evaluation at baseline and over 2 years. PD subjects were classified into two progression clusters using k-means clustering on baseline and follow-up UPDRS-III scores. A 3D CNN was built and tested on PD patients and controls, with binary classifications: controls vs moderate-to-severe PD, controls vs mild PD, and the two clusters of PD progression. The effect of transfer learning was also tested. The CNN effectively differentiated moderate-to-severe PD from controls (74% accuracy) using MRI data alone. Transfer learning significantly improved performance in distinguishing mild PD from controls (64% accuracy). For predicting disease progression, the model achieved over 70% accuracy by combining MRI and clinical data. The brain regions most influential in the CNN's decisions were visualized. The CNN, integrating multimodal data and transfer learning, provides encouraging results toward early-stage classification and progression monitoring in PD. Its explainability through activation maps offers potential for clinical application in early diagnosis and personalized monitoring.
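As a concrete illustration of the progression-labeling step, the sketch below clusters patients into two progression groups from baseline and follow-up UPDRS-III scores with scikit-learn's KMeans; the scores shown are invented toy values, not data from the study.

```python
# Minimal sketch: k-means clustering of PD patients into two progression clusters
# from baseline and 2-year UPDRS-III scores. Toy values only; not study data.
import numpy as np
from sklearn.cluster import KMeans

# rows = patients, columns = [UPDRS-III at baseline, UPDRS-III at 2 years]
updrs = np.array([
    [18, 21],   # little change (illustrative slow progressor)
    [20, 38],   # large worsening (illustrative fast progressor)
    [15, 17],
    [25, 44],
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(updrs)
print(kmeans.labels_)           # progression cluster assigned to each patient
print(kmeans.cluster_centers_)  # mean baseline/follow-up scores per cluster
```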

Are Vision-xLSTM-embedded U-Nets better at segmenting medical images?

Dutta P, Bose S, Roy SK, Mitra S

PubMed · Aug 5, 2025
The development of efficient segmentation strategies for medical images has evolved from its initial dependence on Convolutional Neural Networks (CNNs) to the current investigation of hybrid models that combine CNNs with Vision Transformers (ViTs). There is an increasing focus on developing architectures that are both high-performing and computationally efficient, capable of being deployed on remote systems with limited resources. Although transformers can capture global dependencies in the input space, they face challenges from the corresponding high computational and storage costs. The objective of this research is to propose Vision Extended Long Short-Term Memory (Vision-xLSTM) as an appropriate backbone for medical image segmentation, offering excellent performance at reduced computational cost. This study investigates the integration of CNNs with Vision-xLSTM by introducing the novel U-VixLSTM. The Vision-xLSTM blocks capture the temporal and global relationships within the patches extracted from the CNN feature maps. The convolutional feature-reconstruction path upsamples the output volume from the Vision-xLSTM blocks to produce the segmentation output. U-VixLSTM exhibits superior performance compared to state-of-the-art networks on the publicly available Synapse, ISIC, and ACDC datasets. The findings suggest that U-VixLSTM is a promising alternative to ViTs for medical image segmentation, delivering effective performance without a substantial computational burden. This makes it feasible for deployment in healthcare environments with limited resources for faster diagnosis. Code is available at: https://github.com/duttapallabi2907/U-VixLSTM.
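A structural sketch of the hybrid idea follows: a CNN path extracts feature maps, a sequence model runs over the flattened patch tokens to capture global context, and a convolutional path upsamples back to a segmentation map. An ordinary LSTM is used below purely as a stand-in for the Vision-xLSTM blocks, so this is an assumption-laden skeleton rather than the authors' architecture.

```python
# Structural sketch only: CNN encoder -> sequence model over patch tokens ->
# convolutional upsampling path. nn.LSTM is a placeholder for Vision-xLSTM blocks.
import torch
import torch.nn as nn

class HybridUNetSketch(nn.Module):
    def __init__(self, in_ch=1, feat=32, num_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                       # CNN feature-extraction path
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seq = nn.LSTM(feat, feat, batch_first=True)    # placeholder for Vision-xLSTM blocks
        self.decoder = nn.Sequential(                        # convolutional feature-reconstruction path
            nn.ConvTranspose2d(feat, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, x):
        f = self.encoder(x)                                  # B x C x H x W feature map
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)                # B x (H*W) x C patch tokens
        tokens, _ = self.seq(tokens)                         # global context over tokens
        f = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(f)                               # upsample to segmentation logits

seg = HybridUNetSketch()(torch.randn(1, 1, 64, 64))
print(seg.shape)                                             # torch.Size([1, 2, 64, 64])
```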

Skin lesion segmentation: A systematic review of computational techniques, tools, and future directions.

Sharma AL, Sharma K, Ghosal P

PubMed · Aug 5, 2025
Skin lesion segmentation is an actively studied topic in medical image processing that can support the early diagnosis of skin diseases. Early detection of skin diseases such as melanoma can decrease the mortality rate by 95%. Distinguishing lesions from healthy skin through skin image segmentation is a critical step. Various factors, such as the color, size, and shape of the skin lesion, the presence of hair, and other noise, pose challenges in segmenting a lesion from healthy skin. Hence, the effectiveness of the segmentation technique used is vital for precise disease diagnosis and treatment planning. This review explores and summarizes the latest advancements in skin lesion segmentation techniques and their state-of-the-art methods from 2018 to 2025. It also covers crucial information, including input datasets, pre-processing, augmentation, method configuration, loss functions, hyperparameter settings, and performance metrics. The review addresses the primary challenges encountered in skin lesion segmentation from images and comprehensively compares state-of-the-art techniques. Researchers in this field will find this review useful for its insights into skin lesion segmentation, its methodological details, and its analysis of the results reported by state-of-the-art methods.
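Since the reviewed methods are compared via overlap-based performance metrics, a small illustrative sketch of two of the most common ones, the Dice coefficient and Intersection-over-Union (Jaccard index), is given below; it is generic, not code from the review.

```python
# Illustrative sketch of two overlap metrics commonly reported in skin lesion
# segmentation papers: Dice coefficient and Intersection-over-Union (Jaccard).
import numpy as np

def dice(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def iou(pred, gt, eps=1e-7):
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return (inter + eps) / (union + eps)

pred = np.array([[0, 1], [1, 1]])   # toy predicted lesion mask
gt   = np.array([[0, 1], [0, 1]])   # toy ground-truth lesion mask
print(dice(pred, gt), iou(pred, gt))   # 0.8, 0.667
```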

Controllable Mask Diffusion Model for medical annotation synthesis with semantic information extraction.

Heo C, Jung J

PubMed · Aug 5, 2025
Medical segmentation, a prominent task in medical image analysis utilizing artificial intelligence, plays a crucial role in computer-aided diagnosis and depends heavily on the quality of the training data. However, the availability of sufficient data is constrained by the strict privacy regulations associated with medical data. To mitigate this issue, research on data augmentation has gained significant attention. Medical segmentation tasks require paired datasets consisting of medical images and annotation images, also known as mask images, which represent lesion areas or radiological information within the medical images. Consequently, it is essential to apply data augmentation to both image types. This study proposes the Controllable Mask Diffusion Model, a novel approach capable of controlling and generating new masks. The model leverages the binary structure of the mask to extract semantic information, namely the mask's size, location, and count, which is then applied as multi-conditional input to a diffusion model via a regressor. Through the regressor, newly generated masks conform to the input semantic information, thereby enabling input-driven controllable generation. Additionally, a technique that analyzes correlations within the semantic information was devised for large-scale data synthesis. The generative capacity of the proposed model was evaluated against real datasets, and the model's ability to control and generate new masks based on previously unseen semantic information was confirmed. Furthermore, the practical applicability of the model was demonstrated by augmenting the training data with generated data, applying it to segmentation tasks, and comparing performance with and without augmentation. Experiments were conducted on both single-label and multi-label masks, yielding superior results for both types. This demonstrates the potential applicability of this work to various areas within the medical field.
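To illustrate the semantic-information extraction described above (size, location, and count derived from a binary mask), here is a minimal sketch using scipy.ndimage; the mask, names, and output format are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of extracting size, location, and count from a binary mask
# so they can condition a generative model. Toy mask; not the paper's code.
import numpy as np
from scipy import ndimage

mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:20, 10:20] = 1            # toy lesion 1
mask[40:50, 30:45] = 1            # toy lesion 2

labels, count = ndimage.label(mask)                                     # components -> lesion count
sizes = ndimage.sum(mask, labels, range(1, count + 1))                  # pixels per lesion (size)
centroids = ndimage.center_of_mass(mask, labels, range(1, count + 1))   # (row, col) locations

condition = {"count": count, "sizes": sizes.tolist(), "centroids": centroids}
print(condition)   # this dictionary would serve as multi-conditional input to the diffusion model
```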

Nutritional impact of leucine-enriched supplements: evaluating protein type through artificial intelligence (AI)-augmented muscle ultrasonography in hypercaloric, hyperproteic support.

López Gómez JJ, Gutiérrez JG, Jauregui OI, Cebriá Á, Asensio LE, Martín DP, Velasco PF, Pérez López P, Sahagún RJ, Bargues DR, Godoy EJ, de Luis Román DA

PubMed · Aug 5, 2025
Malnutrition adversely affects physical function and body composition in patients with chronic diseases. Leucine supplementation has shown benefits in improving body composition and clinical outcomes. This study aimed to evaluate the effects of a leucine-enriched oral nutritional supplement (ONS) on the nutritional status of patients at risk of malnutrition. This prospective observational study followed two cohorts of malnourished patients receiving personalized nutritional interventions over 3 months. One group received a leucine-enriched oral supplement (20% protein, 100% whey, 3 g leucine), while the other received a standard supplement (hypercaloric and normo-hyperproteic) with mixed protein sources. Nutritional status was assessed at baseline and after 3 months using anthropometry, bioelectrical impedance analysis, AI-assisted muscle ultrasound, and handgrip strength. A total of 142 patients were included (76 Leucine-ONS, 66 Standard-ONS), mostly women (65.5%), with a mean age of 62.00 (18.66) years. Malnutrition was present in 90.1%, and 34.5% had sarcopenia. Cancer was the most common condition (30.3%). The Leucine-ONS group showed greater improvements in phase angle (+2.08% vs. -1.57%; p=0.02) and rectus femoris thickness (+1.72% vs. -5.89%; p=0.03). Multivariate analysis confirmed associations between Leucine-ONS and improved phase angle (OR=2.41; 95% CI: 1.18-4.92; p=0.02) and reduced intramuscular fat (OR=2.24; 95% CI: 1.13-4.46; p=0.02). The leucine-enriched ONS significantly improved phase angle and muscle thickness compared with the standard ONS, supporting its role in enhancing body composition in malnourished patients. These results must be interpreted in the context of the observational design of the study, the heterogeneity of the comparison groups, and the short duration of the intervention. Further randomized controlled trials are needed to confirm these results and assess long-term clinical and functional outcomes.
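For readers unfamiliar with how the reported odds ratios and 95% confidence intervals are typically obtained, the sketch below fits a multivariable logistic regression with statsmodels and exponentiates the coefficients; the variable names and data are invented, so this is only a generic illustration of the analysis style, not the study's code.

```python
# Generic sketch of deriving odds ratios with 95% CIs from a multivariable
# logistic regression. Variable names and data are invented placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "leucine_ons": rng.integers(0, 2, 142),           # 1 = leucine-enriched ONS group
    "age": rng.normal(62, 18, 142),
    "improved_phase_angle": rng.integers(0, 2, 142),  # 1 = phase angle improved
})

X = sm.add_constant(df[["leucine_ons", "age"]])
fit = sm.Logit(df["improved_phase_angle"], X).fit(disp=0)

or_ci = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_ci.columns = ["OR", "2.5%", "97.5%"]
print(or_ci)   # exponentiated coefficients = odds ratios with 95% CIs
```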

Altered effective connectivity in patients with drug-naïve first-episode, recurrent, and medicated major depressive disorder: a multi-site fMRI study.

Dai P, Huang K, Hu T, Chen Q, Liao S, Grecucci A, Yi X, Chen BT

PubMed · Aug 5, 2025
Major depressive disorder (MDD) has been diagnosed through subjective and inconsistent clinical assessments. Resting-state functional magnetic resonance imaging (rs-fMRI) with connectivity analysis has been valuable for identifying neural correlates of MDD, yet most studies rely on single-site data and small sample sizes. This study utilized large-scale, multi-site rs-fMRI data from the REST-meta-MDD consortium to assess effective connectivity in patients with MDD and its subtypes, i.e., drug-naïve first-episode (FEDN), recurrent (RMDD), and medicated (MMDD) MDD. To mitigate site-related variability, the ComBat algorithm was applied, and multivariate linear regression was used to control for age and gender effects. A random forest classification model was developed to identify the most predictive features, and nested five-fold cross-validation was used to assess model performance. The model effectively distinguished the FEDN subtype from the healthy control (HC) group, achieving 90.13% accuracy and 96.41% AUC. However, classification performance for RMDD vs. FEDN and MMDD vs. FEDN was lower, suggesting that differences between the subtypes were less pronounced than differences between patients with MDD and the HC group. Patients with RMDD exhibited more extensive connectivity abnormalities in the frontal-limbic system and default mode network than patients with FEDN, implying heightened rumination. Additionally, treatment with medication appeared to partially modulate the aberrant connectivity, steering it toward normalization. This study showed altered brain connectivity in patients with MDD and its subtypes, which could be classified by machine learning models with robust performance. Abnormal connectivity could be a potential neural correlate of the presenting symptoms of MDD. These findings provide novel insights into the neural pathogenesis of MDD.
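The classification pipeline (harmonized connectivity features, a random forest, and nested five-fold cross-validation) can be sketched with scikit-learn as below; the features are random placeholders and the ComBat step is only indicated by a comment, so this is an assumed outline rather than the authors' code.

```python
# Sketch of a nested five-fold cross-validation of a random forest on
# (already harmonized) connectivity features. Random placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X = np.random.rand(200, 500)        # effective-connectivity features (after ComBat harmonization)
y = np.random.randint(0, 2, 200)    # 0 = healthy control, 1 = FEDN patient (illustrative labels)

inner = GridSearchCV(                              # inner loop: hyperparameter tuning
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [200, 500], "max_depth": [None, 10]},
    cv=StratifiedKFold(n_splits=5),
    scoring="roc_auc",
)
outer_scores = cross_val_score(                    # outer loop: unbiased performance estimate
    inner, X, y, cv=StratifiedKFold(n_splits=5), scoring="roc_auc"
)
print(outer_scores.mean())
```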

The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

PubMed · Aug 5, 2025
The REgistry of Flow and Perfusion Imaging for Artificial Intelligence with PET (REFINE PET) was established to collect multicenter PET and associated computed tomography (CT) images, together with clinical data and outcomes, into a comprehensive research resource. REFINE PET will enable validation and development of both standard and novel cardiac PET/CT processing methods. REFINE PET is a multicenter, international registry that contains both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), which include death, myocardial infarction, unstable angina, and late revascularization (>90 days from PET). The REFINE PET registry currently contains data for 35,588 patients from 14 sites, with additional patient data and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2200 imaging variables across 42 categories. The registry is poised to address a broad range of clinical questions, supported by correlative invasive angiography (within 6 months of MPI) in 5972 patients and a total of 9252 major adverse cardiovascular events during a median follow-up of 4.2 years. The REFINE PET registry leverages the integration of clinical, multimodality imaging, and novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.
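As a purely illustrative example of how the composite MACE endpoint defined above might be derived from per-event registry columns, here is a short pandas sketch; all column names and values are hypothetical, not REFINE PET fields.

```python
# Hypothetical sketch: composite MACE = death, MI, unstable angina, or late
# revascularization (>90 days after PET). Column names and values are invented.
import pandas as pd

registry = pd.DataFrame({
    "death": [0, 0, 1],
    "mi": [0, 1, 0],
    "unstable_angina": [0, 0, 0],
    "revasc": [1, 0, 0],
    "days_pet_to_revasc": [45, None, None],   # revascularization within 90 days is excluded
})

late_revasc = (registry["revasc"] == 1) & (registry["days_pet_to_revasc"] > 90)
registry["mace"] = (
    (registry["death"] == 1) | (registry["mi"] == 1) |
    (registry["unstable_angina"] == 1) | late_revasc
).astype(int)
print(registry["mace"].tolist())   # [0, 1, 1]: early revascularization alone does not count
```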

Multi-modal MRI cascaded incremental reconstruction with coarse-to-fine spatial registration.

Wang Y, Sun Y, Liu J, Jing L, Liu Q

PubMed · Aug 5, 2025
Magnetic resonance imaging (MRI) typically utilizes multiple contrasts to assess different tissue features, but prolonged scanning increases the risk of motion artifacts. Compressive sensing MRI (CS-MRI) employs computational reconstruction algorithms to accelerate imaging. Fully sampled auxiliary MR images can effectively assist in the reconstruction of under-sampled target MR images. However, due to spatial offsets and differences in imaging parameters, achieving cross-modal fusion is a key issue. To address it, we propose an end-to-end network integrating spatial registration and cascaded incremental reconstruction for multi-modal CS-MRI. Specifically, the proposed network comprises two stages: a coarse-to-fine spatial registration sub-network and a cascaded incremental reconstruction sub-network. The registration sub-network iteratively predicts deformation flow fields between under-sampled target images and fully sampled auxiliary images, gradually aligning them to mitigate spatial offsets. The cascaded incremental reconstruction sub-network adopts a new separated criss-cross window Transformer as its basic component and deploys it in a dual-path structure to fuse inter-modal and intra-modal features from the registered auxiliary images and under-sampled target images. Through cascade learning, we recover incremental details from the fused features and continuously refine the target images. We validate our model on the IXI brain dataset, and the experimental results demonstrate that our network outperforms existing methods.
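The registration sub-network's core operation, warping the fully sampled auxiliary image with a predicted deformation flow field before fusion, can be sketched with PyTorch's grid_sample as below; the flow here is random noise and the sizes are arbitrary, so this is a generic illustration rather than the paper's network.

```python
# Generic sketch: warp a fully sampled auxiliary image with a deformation flow
# field via grid sampling so it can be fused with the under-sampled target.
import torch
import torch.nn.functional as F

aux = torch.randn(1, 1, 128, 128)                 # fully sampled auxiliary contrast

# identity sampling grid in normalized [-1, 1] coordinates
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 128), torch.linspace(-1, 1, 128), indexing="ij")
identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)   # 1 x H x W x 2

flow = 0.05 * torch.randn(1, 128, 128, 2)         # predicted deformation field (random stand-in)
warped_aux = F.grid_sample(aux, identity + flow, align_corners=True)
print(warped_aux.shape)                           # registered auxiliary image, 1 x 1 x 128 x 128
```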

Beyond unimodal analysis: Multimodal ensemble learning for enhanced assessment of atherosclerotic disease progression.

Guarrasi V, Bertgren A, Näslund U, Wennberg P, Soda P, Grönlund C

PubMed · Aug 5, 2025
Atherosclerosis is a leading cardiovascular disease typified by fatty streaks accumulating within arterial walls, culminating in potential plaque ruptures and subsequent strokes. Existing clinical risk scores, such as the systematic coronary risk estimation and the Framingham risk score, profile cardiovascular risk based on factors such as age, cholesterol, and smoking. However, these scores display limited sensitivity in early disease detection. In parallel, ultrasound-based risk markers, such as carotid intima-media thickness, while informative, offer only limited predictive power. Notably, current models largely focus on either ultrasound image-derived risk markers or clinical risk factor data, without combining both for a comprehensive multimodal assessment. This study introduces a multimodal ensemble learning framework to assess atherosclerosis severity, especially in its early, sub-clinical stage. We use multi-objective optimization targeting both performance and diversity, aiming to integrate features from each modality effectively. Our objective is to measure the efficacy of models using multimodal data in assessing vascular aging, i.e., plaque presence and vascular age, over a six-year period. We also delineate a procedure for selecting optimal models from a vast pool, focusing on those best suited for the classification tasks. Additionally, through explainable artificial intelligence techniques, this work investigates key model contributors and identifies distinct subject subgroups.
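The performance/diversity trade-off behind the ensemble selection can be illustrated as follows: score candidate classifier pairs by validation accuracy and by pairwise disagreement, then keep the non-dominated (Pareto-optimal) pairs. The models, data, and restriction to pairs are simplifying assumptions; the actual framework optimizes larger ensembles.

```python
# Illustrative sketch: select classifier pairs that are Pareto-optimal with
# respect to validation accuracy (performance) and disagreement (diversity).
import numpy as np
from itertools import combinations
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=400, n_features=20, random_state=0)
Xtr, Xva, ytr, yva = train_test_split(X, y, random_state=0)

models = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0), SVC()]
preds = [m.fit(Xtr, ytr).predict(Xva) for m in models]

candidates = []
for i, j in combinations(range(len(models)), 2):
    perf = (accuracy_score(yva, preds[i]) + accuracy_score(yva, preds[j])) / 2  # performance
    div = float(np.mean(preds[i] != preds[j]))                                  # diversity
    candidates.append(((i, j), perf, div))

# keep the Pareto front: a pair survives if no other pair is at least as good on
# both objectives and strictly better on one
pareto = [c for c in candidates
          if not any(o[1] >= c[1] and o[2] >= c[2] and (o[1] > c[1] or o[2] > c[2])
                     for o in candidates)]
print(pareto)
```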

Recurrent inference machine for medical image registration.

Zhang Y, Zhao Y, Xue H, Kellman P, Klein S, Tao Q

PubMed · Aug 5, 2025
Image registration is essential for medical image applications where alignment of voxels across multiple images is needed for qualitative or quantitative analysis. With recent advances in deep neural networks and parallel computing, deep learning-based medical image registration methods have become competitive thanks to their flexible modeling and fast inference capabilities. However, compared with traditional optimization-based registration methods, the speed advantage may come at the cost of registration performance at inference time. Moreover, deep neural networks typically demand large training datasets, whereas optimization-based methods are training-free. To improve registration accuracy and data efficiency, we propose a novel image registration method, termed the Recurrent Inference Image Registration (RIIR) network. RIIR is formulated as a meta-learning solver that addresses the registration problem iteratively. It tackles the accuracy and data-efficiency issues by learning the optimization update rule, combining implicit regularization with explicit gradient input. We extensively evaluated RIIR on brain MRI, lung CT, and quantitative cardiac MRI datasets, in terms of both registration accuracy and training data efficiency. Our experiments showed that RIIR outperformed a range of deep learning-based methods, even with only 5% of the training data, demonstrating high data efficiency. Key findings from our ablation studies highlighted the added value of the hidden states introduced in the recurrent inference framework for meta-learning. RIIR offers a highly data-efficient framework for deep learning-based medical image registration.
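The recurrent-inference idea, a small network with a hidden state that repeatedly turns the gradient of a similarity loss into an update of the displacement field, can be sketched generically as below; the convolution standing in for a conv-GRU, the MSE similarity, and all sizes are assumptions, not the authors' RIIR implementation.

```python
# Conceptual sketch of a learned, recurrent update rule for registration:
# gradient of the similarity loss + hidden state -> displacement-field update.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpdateCell(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.mix = nn.Conv2d(2 + ch, ch, 3, padding=1)   # crude stand-in for a conv-GRU cell
        self.out = nn.Conv2d(ch, 2, 3, padding=1)        # emits a displacement-field update

    def forward(self, grad, hidden):
        hidden = torch.tanh(self.mix(torch.cat([grad, hidden], dim=1)))
        return self.out(hidden), hidden

fixed = torch.randn(1, 1, 64, 64)                        # fixed (target) image
moving = torch.randn(1, 1, 64, 64)                       # moving image to be registered
flow = torch.zeros(1, 2, 64, 64, requires_grad=True)     # displacement field, (dx, dy) channels
hidden = torch.zeros(1, 16, 64, 64)
cell = UpdateCell()

ys, xs = torch.meshgrid(torch.linspace(-1, 1, 64), torch.linspace(-1, 1, 64), indexing="ij")
identity = torch.stack((xs, ys), dim=-1).unsqueeze(0)    # identity grid in [-1, 1] coordinates

for _ in range(4):                                       # recurrent inference iterations
    warped = F.grid_sample(moving, identity + flow.permute(0, 2, 3, 1), align_corners=True)
    loss = F.mse_loss(warped, fixed)                     # similarity term
    grad = torch.autograd.grad(loss, flow)[0]            # explicit gradient input to the cell
    delta, hidden = cell(grad, hidden)                   # learned update rule with hidden state
    flow = (flow + delta).detach().requires_grad_(True)

print(flow.shape)                                        # refined displacement field, 1 x 2 x 64 x 64
```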