
An integrated deep learning model for early and multi-class diagnosis of Alzheimer's disease from MRI scans.

Vinukonda ER, Jagadesh BN

pubmed · May 17 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely affects memory, behavior, and cognitive function. Early and accurate diagnosis is crucial for effective intervention, yet detecting subtle changes in the early stages remains a challenge. In this study, we propose a hybrid deep learning-based multi-class classification system for AD using magnetic resonance imaging (MRI). The proposed approach integrates an improved DeepLabV3+ (IDeepLabV3+) model for lesion segmentation, followed by feature extraction using the LeNet-5 model. A novel feature selection method based on average correlation and error probability is employed to enhance classification efficiency. Finally, an Enhanced ResNext (EResNext) model is used to classify AD into four stages: non-dementia (ND), very mild dementia (VMD), mild dementia (MD), and moderate dementia (MOD). The proposed model achieves an accuracy of 98.12%, demonstrating its superior performance over existing methods. The area under the ROC curve (AUC) further validates its effectiveness, with the highest score of 0.97 for moderate dementia. This study highlights the potential of hybrid deep learning models in improving early AD detection and staging, contributing to more accurate clinical diagnosis and better patient care.
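The pipeline's most distinctive step is the feature selection stage sitting between LeNet-5 feature extraction and EResNext classification. The sketch below is a minimal, hedged stand-in for a correlation-based selector; it implements only an average-correlation redundancy ranking, not the paper's exact average-correlation/error-probability criterion, and all data here are random placeholders:

```python
import numpy as np

def select_least_redundant(X, k=64):
    """Rank features by their average absolute correlation with all other
    features and keep the k least redundant ones. Illustrative stand-in
    for the paper's average-correlation / error-probability criterion
    (the error-probability term is omitted here)."""
    corr = np.abs(np.corrcoef(X, rowvar=False))  # (d, d) feature-feature correlations
    np.fill_diagonal(corr, 0.0)                  # ignore self-correlation
    avg_corr = corr.mean(axis=1)                 # redundancy score per feature
    keep = np.argsort(avg_corr)[:k]              # indices of least redundant features
    return X[:, keep], keep

# Toy usage: 100 subjects, 256 extracted features, select 64
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 256))
X_sel, idx = select_least_redundant(X)
print(X_sel.shape)  # (100, 64)
```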

Exploring interpretable echo analysis using self-supervised parcels.

Majchrowska S, Hildeman A, Mokhtari R, Diethe T, Teare P

pubmed · May 17 2025
The application of AI for predicting critical heart failure endpoints using echocardiography is a promising avenue to improve patient care and treatment planning. However, fully supervised training of deep learning models in medical imaging requires a substantial amount of labelled data, posing significant challenges due to the need for skilled medical professionals to annotate image sequences. Our study addresses this limitation by exploring the potential of self-supervised learning, emphasising interpretability, robustness, and safety as crucial factors in cardiac imaging analysis. We leverage self-supervised learning on a large unlabelled dataset, facilitating the discovery of features applicable to various downstream tasks. The backbone model not only generates informative features for training smaller models using simple techniques but also produces features that are interpretable by humans. The study employs a modified Self-supervised Transformer with Energy-based Graph Optimisation (STEGO) network on top of self-DIstillation with NO labels (DINO) as a backbone model, pre-trained on diverse medical and non-medical data. This approach facilitates the generation of self-segmented outputs, termed "parcels", which identify distinct anatomical sub-regions of the heart. Our findings highlight the robustness of these self-learned parcels across diverse patient profiles and phases of the cardiac cycle. Moreover, these parcels offer high interpretability and effectively encapsulate clinically relevant cardiac substructures. We conduct a comprehensive evaluation of the proposed self-supervised approach on publicly available datasets, demonstrating its adaptability to a wide range of requirements. Our results underscore the potential of self-supervised learning to address labelled data scarcity in medical imaging, offering a path to improve cardiac imaging analysis and enhance the efficiency and interpretability of diagnostic procedures, thus positively impacting patient care and clinical decision-making.
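To make the "parcels" idea concrete, here is a toy sketch: patch-token embeddings (random stand-ins here for the DINO backbone's features) are clustered so that each cluster id acts as a parcel. Plain k-means is used as a deliberate simplification of STEGO's energy-based graph optimisation; shapes and cluster count are assumptions, not the paper's configuration:

```python
import numpy as np
from sklearn.cluster import KMeans

# Fake patch embeddings standing in for DINO ViT tokens on an echo clip:
# 32 frames, a 14x14 token grid per frame, 384-dim tokens (ViT-S-like).
n_frames, h, w, d = 32, 14, 14, 384
tokens = np.random.default_rng(0).normal(size=(n_frames * h * w, d))

# Cluster patch tokens across the whole sequence; each cluster id is a
# "parcel", i.e. a self-segmented anatomical sub-region of the heart.
kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(tokens)
parcels = kmeans.labels_.reshape(n_frames, h, w)  # per-frame parcel map
```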

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

pubmed · May 16 2025
Artificial intelligence (AI) can be applied in multiple subspecialties in craniomaxillofacial (CMF) surgery. This article provides an overview of AI fundamentals, focusing on classification, object detection, and segmentation, the core tasks used in CMF applications. It then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery, highlighting AI-driven advancements in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges in AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Automated CT segmentation for lower extremity tissues in lymphedema evaluation using deep learning.

Na S, Choi SJ, Ko Y, Urooj B, Huh J, Cha S, Jung C, Cheon H, Jeon JY, Kim KW

pubmed · May 16 2025
Clinical assessment of lymphedema, particularly for lymphedema severity and fluid-fibrotic lesions, remains challenging with traditional methods. We aimed to develop and validate a deep learning segmentation tool for automated tissue component analysis in lower extremity CT scans. For the development dataset, lower extremity CT venography scans were collected from 118 patients with gynecologic cancers for algorithm training. Reference standards were created by segmentation of fat, muscle, and fluid-fibrotic tissue components using 3D Slicer. A deep learning model based on the Unet++ architecture with an EfficientNet-B7 encoder was developed and trained. Segmentation accuracy of the deep learning model was validated in an internal validation set (n = 10) and an external validation set (n = 10) using the Dice similarity coefficient (DSC) and volumetric similarity (VS). A graphical user interface (GUI) tool was developed for visualization of the segmentation results. Our deep learning algorithm achieved high segmentation accuracy. Mean DSCs for each component and for all components combined ranged from 0.945 to 0.999 in the internal validation set and from 0.946 to 0.999 in the external validation set. Similar performance was observed for VS, with mean VSs for all components ranging from 0.97 to 0.999. In volumetric analysis, mean volumes of the entire leg and each component did not differ significantly between reference standard and deep learning measurements (p > 0.05). Our GUI displays lymphedema mapping, highlighting segmented fat, muscle, and fluid-fibrotic components in the entire leg. Our deep learning algorithm provides an automated segmentation tool enabling accurate segmentation, volume measurement of tissue components, and lymphedema mapping.
Question: Clinical assessment of lymphedema remains challenging, particularly for tissue segmentation and quantitative severity evaluation.
Findings: A deep learning algorithm achieved DSCs > 0.95 and VS > 0.97 for fat, muscle, and fluid-fibrotic components in internal and external validation datasets.
Clinical relevance: The developed deep learning tool accurately segments and quantifies lower extremity tissue components on CT scans, enabling automated lymphedema evaluation and mapping with high segmentation accuracy.
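For readers who want to reproduce the architecture, the segmentation_models_pytorch library exposes a Unet++ decoder with an EfficientNet-B7 encoder directly. The snippet below is a sketch of one plausible setup; channel and class counts are assumptions inferred from the abstract, not the authors' code:

```python
import torch
import segmentation_models_pytorch as smp  # pip install segmentation-models-pytorch

# Unet++ with an EfficientNet-B7 encoder, as described in the abstract.
model = smp.UnetPlusPlus(
    encoder_name="efficientnet-b7",
    encoder_weights="imagenet",
    in_channels=1,   # single-channel CT slices (assumption)
    classes=3,       # fat, muscle, fluid-fibrotic tissue (assumption)
)

def dice(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Dice similarity coefficient for binary masks."""
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)
```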

Artificial intelligence generated 3D body composition predicts dose modifications in patients undergoing neoadjuvant chemotherapy for rectal cancer.

Besson A, Cao K, Mardinli A, Wirth L, Yeung J, Kokelaar R, Gibbs P, Reid F, Yeung JM

pubmed · May 16 2025
Chemotherapy administration is a balancing act between giving enough to achieve the desired tumour response while limiting adverse effects. Chemotherapy dosing is based on body surface area (BSA). Emerging evidence suggests body composition plays a crucial role in the pharmacokinetic and pharmacodynamic profile of cytotoxic agents and could inform optimal dosing. This study aims to assess how lumbosacral body composition influences adverse events in patients receiving neoadjuvant chemotherapy for rectal cancer. A retrospective study (February 2013 to March 2023) examined the impact of body composition on neoadjuvant treatment outcomes for rectal cancer patients. Staging CT scans were analysed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and subcutaneous adipose tissue volume and density. Multivariate analyses explored the relationship between body composition and chemotherapy outcomes. 242 patients were included (164 males, 78 females), with a median age of 63.4 years. Chemotherapy dose reductions occurred more frequently in females (26.9% vs. 15.9%, p = 0.042) and in females with greater VAT density (-82.7 vs. -89.1, p = 0.007) and SM:IMAT + VAT volume ratio (1.99 vs. 1.36, p = 0.042). BSA was a poor predictor of dose reduction (AUC 0.397, sensitivity 38%, specificity 60%) for female patients, whereas the SM:IMAT + VAT volume ratio (AUC 0.651, sensitivity 76%, specificity 61%) and VAT density (AUC 0.699, sensitivity 57%, specificity 74%) showed greater predictive ability. Body composition did not influence dose adjustment in male patients. Lumbosacral body composition outperformed BSA in predicting adverse events in female patients with rectal cancer undergoing neoadjuvant chemotherapy.
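The single-variable AUC comparison reported above can be computed with scikit-learn's roc_auc_score. The sketch below uses randomly generated stand-in data (so the printed AUCs will hover near 0.5), purely to show the mechanics, not to reproduce the study's values:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical per-patient predictors and a dose-reduction label.
rng = np.random.default_rng(0)
dose_reduced = rng.integers(0, 2, size=78)    # 1 = dose reduction occurred
bsa = rng.normal(1.7, 0.2, size=78)           # body surface area (m^2)
vat_density = rng.normal(-85, 5, size=78)     # VAT density (HU)

# Discrimination of each single predictor, as in the abstract's AUCs.
print("BSA AUC:        ", roc_auc_score(dose_reduced, bsa))
print("VAT density AUC:", roc_auc_score(dose_reduced, vat_density))
```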

Impact of sarcopenia and obesity on mortality in older adults with SARS-CoV-2 infection: automated deep learning body composition analysis in the NAPKON-SUEP cohort.

Schluessel S, Mueller B, Tausendfreund O, Rippl M, Deissler L, Martini S, Schmidmaier R, Stoecklein S, Ingrisch M, Blaschke S, Brandhorst G, Spieth P, Lehnert K, Heuschmann P, de Miranda SMN, Drey M

pubmed · May 16 2025
Severe respiratory infections pose a major challenge in clinical practice, especially in older adults. Body composition analysis could play a crucial role in risk assessment and therapeutic decision-making. This study investigates whether obesity or sarcopenia has a greater impact on mortality in patients with severe respiratory infections. The study focuses on the National Pandemic Cohort Network (NAPKON-SUEP) cohort, which includes patients over 60 years of age with confirmed severe COVID-19 pneumonia. An innovative approach was adopted, using pre-trained deep learning models for automated analysis of body composition based on routine thoracic CT scans. The study included 157 hospitalized patients (mean age 70 ± 8 years, 41% women, mortality rate 39%) from the NAPKON-SUEP cohort at 57 study sites. A pre-trained deep learning model was used to analyze body composition (muscle, bone, fat, and intramuscular fat volumes) from thoracic CT images. Binary logistic regression was performed to investigate the association between obesity, sarcopenia, and mortality. Non-survivors exhibited lower muscle volume (p = 0.043), higher intramuscular fat volume (p = 0.041), and a higher BMI (p = 0.031) compared to survivors. Among all body composition parameters, muscle volume adjusted for weight was the strongest predictor of mortality in the logistic regression model, even after adjusting for sex, age, diabetes, chronic lung disease, and chronic kidney disease (odds ratio = 0.516). In contrast, BMI did not show significant differences after adjustment for comorbidities. This study identifies muscle volume derived from routine CT scans as a major predictor of survival in patients with severe respiratory infections. The results underscore the potential of AI-supported, CT-based body composition analysis for risk stratification and clinical decision-making, not only for COVID-19 patients but also for all patients over 60 years of age with severe acute respiratory infections. The innovative application of pre-trained deep learning models opens up new possibilities for automated and standardized assessment in clinical practice.
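A covariate-adjusted logistic mortality model of the kind described can be fit with statsmodels; everything below (variable names, simulated values) is illustrative, not NAPKON-SUEP data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical cohort standing in for the study variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "muscle_vol_per_kg": rng.normal(5, 1, 157),  # CT muscle volume / body weight
    "age": rng.normal(70, 8, 157),
    "sex": rng.integers(0, 2, 157),
    "diabetes": rng.integers(0, 2, 157),
    "died": rng.integers(0, 2, 157),             # in-hospital mortality
})

# Binary logistic regression adjusted for covariates, as in the abstract;
# exp(coef) gives the odds ratio per unit of adjusted muscle volume.
X = sm.add_constant(df[["muscle_vol_per_kg", "age", "sex", "diabetes"]])
fit = sm.Logit(df["died"], X).fit(disp=0)
print(np.exp(fit.params["muscle_vol_per_kg"]))   # odds ratio
```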

Patient-Specific Dynamic Digital-Physical Twin for Coronary Intervention Training: An Integrated Mixed Reality Approach

Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang

arxiv preprint · May 16 2025
Background and Objective: Precise preoperative planning and effective physician training for coronary interventions are increasingly important. Despite advances in medical imaging technologies, transforming static or limited dynamic imaging data into comprehensive dynamic cardiac models remains challenging. Existing training systems lack accurate simulation of cardiac physiological dynamics. This study develops a comprehensive dynamic cardiac model research framework based on 4D-CTA, integrating digital twin technology, computer vision, and physical model manufacturing to provide precise, personalized tools for interventional cardiology. Methods: Using 4D-CTA data from a 60-year-old female with three-vessel coronary stenosis, we segmented cardiac chambers and coronary arteries, constructed dynamic models, and implemented skeletal skinning weight computation to simulate vessel deformation across 20 cardiac phases. Transparent vascular physical models were manufactured using medical-grade silicone. We developed cardiac output analysis and virtual angiography systems, implemented guidewire 3D reconstruction using binocular stereo vision, and evaluated the system through angiography validation and CABG training applications. Results: Morphological consistency between virtual and real angiography reached 80.9%. Dice similarity coefficients for guidewire motion ranged from 0.741 to 0.812, with mean trajectory errors below 1.1 mm. The transparent model demonstrated advantages in CABG training, allowing direct visualization while simulating beating-heart challenges. Conclusion: Our patient-specific digital-physical twin approach effectively reproduces both the anatomical structures and dynamic characteristics of coronary vasculature, offering a dynamic environment with visual and tactile feedback valuable for education and clinical planning.
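The "skeletal skinning weight computation" used to deform vessels across cardiac phases is, at its core, linear blend skinning: each mesh vertex moves as a weight-blended combination of bone transforms, v' = sum_i w_i (R_i v + t_i). A generic numpy sketch of that idea, not the paper's implementation:

```python
import numpy as np

def linear_blend_skinning(verts, weights, transforms):
    """Deform mesh vertices as a weighted blend of bone transforms.
    verts: (n, 3), weights: (n, n_bones) rows summing to 1,
    transforms: list of (4, 4) homogeneous matrices, one per bone."""
    verts_h = np.hstack([verts, np.ones((len(verts), 1))])  # homogeneous coords
    out = np.zeros_like(verts)
    for i, T in enumerate(transforms):
        out += weights[:, i:i + 1] * (verts_h @ T.T)[:, :3]
    return out

# Toy usage: two bones, identity plus a small translation.
rng = np.random.default_rng(0)
verts = rng.normal(size=(100, 3))
weights = rng.dirichlet([1, 1], size=100)   # per-vertex skinning weights
T0, T1 = np.eye(4), np.eye(4)
T1[:3, 3] = [0.0, 0.0, 1.0]
deformed = linear_blend_skinning(verts, weights, [T0, T1])
```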

Uncertainty quantification for deep learning-based metastatic lesion segmentation on whole body PET/CT.

Schott B, Santoro-Fernandes V, Klanecek Z, Perlman S, Jeraj R

pubmed · May 16 2025
Deep learning models are increasingly being implemented for automated medical image analysis to inform patient care. Most models, however, lack uncertainty information, without which the reliability of model outputs cannot be ensured. Several uncertainty quantification (UQ) methods exist to capture model uncertainty. Yet, it is not clear which method is optimal for a given task. The purpose of this work was to investigate several commonly used UQ methods for the critical yet understudied task of metastatic lesion segmentation on whole body PET/CT. 
Approach:
59 whole body 68Ga-DOTATATE PET/CT images of patients undergoing theranostic treatment of metastatic neuroendocrine tumors were used in this work. A 3D U-Net was trained for lesion segmentation following five-fold cross-validation. Uncertainty measures derived from four UQ methods (probability entropy, Monte Carlo dropout, deep ensembles, and test-time augmentation) were investigated. Each uncertainty measure was assessed across four quantitative evaluations: (1) its ability to detect artificially degraded image data at low, medium, and high degradation magnitudes; (2) to detect false-positive (FP) predicted regions; (3) to recover false-negative (FN) predicted regions; and (4) to establish correlations with model biomarker extraction and segmentation performance metrics.
Results: Probability entropy and test-time augmentation respectively achieved the lowest and highest degraded-image detection performance at low (AUC = 0.54 vs. 0.68), medium (AUC = 0.70 vs. 0.82), and high (AUC = 0.83 vs. 0.90) degradation magnitudes. For detecting FPs, all UQ methods achieved strong performance, with AUC values ranging narrowly between 0.77 and 0.81. FN region recovery performance was strongest for test-time augmentation and weakest for probability entropy. Performance in the correlation analysis was mixed: the strongest performance was achieved by test-time augmentation for SUVtotal capture (ρ = 0.57) and segmentation Dice coefficient (ρ = 0.72), by Monte Carlo dropout for SUVmean capture (ρ = 0.35), and by probability entropy for segmentation cross entropy (ρ = 0.96).
Significance: Overall, test-time augmentation demonstrated superior uncertainty quantification performance and is recommended for use in the metastatic lesion segmentation task. It also offers the advantage of being post hoc and computationally efficient. In contrast, probability entropy performed the worst, highlighting the need for advanced UQ approaches for this task.
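As a rough illustration of the winning method: test-time augmentation uncertainty can be computed by running flipped copies of the input through the model, undoing each flip on the output, and taking the per-voxel variance of the probabilities. The sketch below assumes a binary 3D segmentation model with (B, C, D, H, W) inputs and is not the paper's exact protocol:

```python
import torch

def tta_uncertainty(model, volume):
    """Test-time-augmentation uncertainty for binary 3D segmentation:
    returns the mean probability map and a per-voxel variance map
    over identity plus axis flips. A generic sketch under assumed
    (B, C, D, H, W) input shape."""
    model.eval()
    probs = []
    with torch.no_grad():
        for dims in [(), (2,), (3,), (4,)]:          # identity + flips along D/H/W
            x = torch.flip(volume, dims) if dims else volume
            p = torch.sigmoid(model(x))
            probs.append(torch.flip(p, dims) if dims else p)  # undo the flip
    probs = torch.stack(probs)                        # (n_aug, B, 1, D, H, W)
    return probs.mean(0), probs.var(0)                # segmentation, uncertainty
```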

Pancreas segmentation using AI developed on the largest CT dataset with multi-institutional validation and implications for early cancer detection.

Mukherjee S, Antony A, Patnam NG, Trivedi KH, Karbhari A, Nagaraj M, Murlidhar M, Goenka AH

pubmed · May 16 2025
Accurate and fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground truth segmentations were performed by radiologists, which were used to train a 3D nnU-Net model through five-fold cross-validation, generating an ensemble of top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), demonstrating high spatial concordance with radiologist-annotated volumes (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
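The concordance correlation coefficient reported alongside the DSC has a closed form, CCC = 2 cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2). A small numpy implementation with toy volume data (the values below are hypothetical, not the study's measurements):

```python
import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's concordance correlation coefficient between two sets of
    measurements, e.g. radiologist vs. model pancreas volumes."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Toy check: nearly identical measurements give a CCC close to 1.
rng = np.random.default_rng(0)
truth = rng.normal(70, 10, size=50)          # hypothetical volumes (cm^3)
pred = truth + rng.normal(0, 2, size=50)     # model volumes with small error
print(round(concordance_ccc(truth, pred), 3))
```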

GOUHFI: a novel contrast- and resolution-agnostic segmentation tool for Ultra-High Field MRI

Marc-Antoine Fortin, Anne Louise Kristoffersen, Michael Staff Larsen, Laurent Lamalle, Ruediger Stirnberg, Paal Erik Goa

arxiv preprint · May 16 2025
Recently, Ultra-High Field MRI (UHF-MRI) has become more available and is one of the best tools to study the brain. One common step in quantitative neuroimaging is brain segmentation. However, the differences between UHF-MRI and 1.5-3T images are such that automatic segmentation techniques optimized at these lower field strengths usually produce unsatisfactory results for UHF images. It has been particularly challenging to perform the quantitative analyses typically done with 1.5-3T data, considerably limiting the potential of UHF-MRI. Hence, we propose a novel Deep Learning (DL)-based segmentation technique called GOUHFI: Generalized and Optimized segmentation tool for Ultra-High Field Images, designed to segment UHF images of various contrasts and resolutions. For training, we used a total of 206 label maps from four datasets acquired at 3T, 7T and 9.4T. In contrast to most DL strategies, we used a previously proposed domain randomization approach, where synthetic images generated from the label maps were used to train a 3D U-Net. GOUHFI was tested on seven different datasets and compared to techniques such as FastSurferVINN and CEREBRUM-7T. GOUHFI was able to segment the six contrasts and seven resolutions tested at 3T, 7T and 9.4T. Average Dice-Sorensen Similarity Coefficient (DSC) scores of 0.87, 0.84, and 0.91 were computed against the ground truth segmentations at 3T, 7T and 9.4T, respectively. Moreover, GOUHFI demonstrated impressive resistance to the typical inhomogeneities observed in UHF-MRI, making it a powerful new segmentation tool that allows the usual quantitative analysis pipelines to be applied at UHF as well. Ultimately, GOUHFI is a promising new segmentation tool, the first of its kind to offer a contrast- and resolution-agnostic alternative for UHF-MRI, making it a forthcoming alternative for neuroscientists working with UHF-MRI or even lower field strengths.
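The domain randomization idea (training on synthetic images generated from label maps, in the spirit of SynthSeg) can be sketched minimally: sample a random intensity distribution per label and render a new image each time. The real generator also randomizes deformations, bias fields, and resolution; this toy version, with assumed intensity ranges, only randomizes contrast:

```python
import numpy as np

def synth_from_labels(labels: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Generate one synthetic training image from a label map by sampling
    a random mean/std intensity per label and drawing voxel noise from it.
    Simplified sketch of SynthSeg-style domain randomization."""
    img = np.zeros(labels.shape, dtype=np.float32)
    for lab in np.unique(labels):
        mu, sigma = rng.uniform(0, 255), rng.uniform(1, 25)  # assumed ranges
        mask = labels == lab
        img[mask] = rng.normal(mu, sigma, size=mask.sum())
    return img

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=(64, 64, 64))   # toy 3D label map
image = synth_from_labels(labels, rng)           # contrast-randomized volume
```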
