Page 25 of 33324 results

An integrated deep learning model for early and multi-class diagnosis of Alzheimer's disease from MRI scans.

Vinukonda ER, Jagadesh BN

pubmed logopapers · May 17 2025
Alzheimer's disease (AD) is a progressive neurodegenerative disorder that severely affects memory, behavior, and cognitive function. Early and accurate diagnosis is crucial for effective intervention, yet detecting subtle changes in the early stages remains a challenge. In this study, we propose a hybrid deep learning-based multi-class classification system for AD using magnetic resonance imaging (MRI). The proposed approach integrates an improved DeepLabV3+ (IDeepLabV3+) model for lesion segmentation, followed by feature extraction using the LeNet-5 model. A novel feature selection method based on average correlation and error probability is employed to enhance classification efficiency. Finally, an Enhanced ResNext (EResNext) model is used to classify AD into four stages: non-dementia (ND), very mild dementia (VMD), mild dementia (MD), and moderate dementia (MOD). The proposed model achieves an accuracy of 98.12%, demonstrating its superior performance over existing methods. The area under the ROC curve (AUC) further validates its effectiveness, with the highest score of 0.97 for moderate dementia. This study highlights the potential of hybrid deep learning models in improving early AD detection and staging, contributing to more accurate clinical diagnosis and better patient care.
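The per-class AUC quoted above is a standard one-vs-rest metric. As a minimal illustration of how such a score is computed (toy labels and probabilities only, not the study's data or code), ROC AUC can be evaluated directly from the Mann-Whitney U statistic:

```python
def binary_auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive case is scored above a random negative one."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def one_vs_rest_auc(probs, labels, cls):
    """AUC for one class (e.g. moderate dementia) against all others,
    as in per-stage reporting; probs[i][cls] is the class probability."""
    return binary_auc([p[cls] for p in probs],
                      [1 if y == cls else 0 for y in labels])
```

This rank-based formulation gives the same value as integrating the ROC curve and needs no thresholding of the classifier outputs.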

Technology Advances in the placement of naso-enteral tubes and in the management of enteral feeding in critically ill patients: a narrative study.

Singer P, Setton E

pubmed logopapers · May 16 2025
Enteral feeding requires secure access to the upper gastrointestinal tract, evaluation of gastric function to detect gastrointestinal intolerance, and a nutritional target matched to the patient's needs. Only in recent decades has progress been made in techniques for appropriate placement of the nasogastric tube, mainly reducing pulmonary complications. These techniques include point-of-care ultrasound (POCUS), electromagnetic sensors, real-time video-assisted placement, impedance sensors, and virtual reality. POCUS is also the most accessible tool for evaluating gastric emptying, using antrum echo density measurement. Automatic measurement of gastric antrum content, supported by deep learning algorithms and electrical impedance, provides gastric volume, and intragastric balloons can evaluate motility. Finally, advanced technologies have been tested to improve nutritional intake: stimulation of the esophageal mucosa to induce a contraction wave that may improve enteral nutrition efficacy, and impedance sensors that detect gastric reflux and modulate the feeding rate accordingly, have been clinically evaluated. Use of electronic health records integrating nutritional needs, targets, and administration is recommended.

Pretrained hybrid transformer for generalizable cardiac substructures segmentation from contrast and non-contrast CTs in lung and breast cancers

Aneesh Rangnekar, Nikhil Mankuzhy, Jonas Willmann, Chloe Choi, Abraham Wu, Maria Thor, Andreas Rimner, Harini Veeraraghavan

arxiv logopreprint · May 16 2025
AI automated segmentations for radiation treatment planning (RTP) can deteriorate when applied to clinical cases with characteristics different from the training dataset. Hence, we refined a pretrained transformer into a hybrid transformer convolutional network (HTN) to segment cardiac substructures in lung and breast cancer patients imaged with varying contrasts and scan positions. Cohort I, consisting of 56 contrast-enhanced (CECT) and 124 non-contrast CT (NCCT) scans from patients with non-small cell lung cancers acquired in supine position, was used to create an oracle model (all 180 training cases) and a balanced model (CECT: 32, NCCT: 32 training cases). Models were evaluated on a held-out validation set of 60 cohort I patients and 66 patients with breast cancer from cohort II acquired in supine (n=45) and prone (n=21) positions. Accuracy was measured using DSC, HD95, and dose metrics. The publicly available TotalSegmentator served as the benchmark. The oracle and balanced models were similarly accurate (DSC cohort I: 0.80 ± 0.10 versus 0.81 ± 0.10; cohort II: 0.77 ± 0.13 versus 0.80 ± 0.12), outperforming TotalSegmentator. The balanced model, using half as many training cases as the oracle, produced dose metrics similar to manual delineations for all cardiac substructures. It was robust to CT contrast in 6 of 8 substructures and to patient scan position in 5 of 8 substructures, and showed low correlations of accuracy with patient size and age. The HTN demonstrated robustly accurate (geometric and dose metric) segmentation of cardiac substructures from CTs with varying imaging and patient characteristics, a key requirement for clinical use. Moreover, combining pretraining with a balanced distribution of NCCT and CECT scans provided reliably accurate segmentations under varied conditions with far fewer labeled datasets than the oracle model.
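The DSC values reported in this and several other abstracts on this page share one definition: twice the overlap of two binary masks divided by their total volume. A minimal sketch (illustrative, not the authors' implementation):

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks.
    1.0 = perfect overlap, 0.0 = no overlap; two empty masks count as 1.0."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: conventionally perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

The same function applies unchanged to 3D volumes, since NumPy sums over all axes of the mask arrays.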

Patient-Specific Dynamic Digital-Physical Twin for Coronary Intervention Training: An Integrated Mixed Reality Approach

Shuo Wang, Tong Ren, Nan Cheng, Rong Wang, Li Zhang

arxiv logopreprint · May 16 2025
Background and Objective: Precise preoperative planning and effective physician training for coronary interventions are increasingly important. Despite advances in medical imaging technologies, transforming static or limited dynamic imaging data into comprehensive dynamic cardiac models remains challenging. Existing training systems lack accurate simulation of cardiac physiological dynamics. This study develops a comprehensive dynamic cardiac model research framework based on 4D-CTA, integrating digital twin technology, computer vision, and physical model manufacturing to provide precise, personalized tools for interventional cardiology. Methods: Using 4D-CTA data from a 60-year-old female with three-vessel coronary stenosis, we segmented cardiac chambers and coronary arteries, constructed dynamic models, and implemented skeletal skinning weight computation to simulate vessel deformation across 20 cardiac phases. Transparent vascular physical models were manufactured using medical-grade silicone. We developed cardiac output analysis and virtual angiography systems, implemented guidewire 3D reconstruction using binocular stereo vision, and evaluated the system through angiography validation and CABG training applications. Results: Morphological consistency between virtual and real angiography reached 80.9%. Dice similarity coefficients for guidewire motion ranged from 0.741-0.812, with mean trajectory errors below 1.1 mm. The transparent model demonstrated advantages in CABG training, allowing direct visualization while simulating beating heart challenges. Conclusion: Our patient-specific digital-physical twin approach effectively reproduces both anatomical structures and dynamic characteristics of coronary vasculature, offering a dynamic environment with visual and tactile feedback valuable for education and clinical planning.

GOUHFI: a novel contrast- and resolution-agnostic segmentation tool for Ultra-High Field MRI

Marc-Antoine Fortin, Anne Louise Kristoffersen, Michael Staff Larsen, Laurent Lamalle, Ruediger Stirnberg, Paal Erik Goa

arxiv logopreprint · May 16 2025
Recently, Ultra-High Field MRI (UHF-MRI) has become more widely available and is one of the best tools to study the brain. A common step in quantitative neuroimaging is brain segmentation. However, the differences between UHF-MRI and 1.5-3T images are such that automatic segmentation techniques optimized at those field strengths usually produce unsatisfactory results for UHF images. It has been particularly challenging to perform the quantitative analyses typically done with 1.5-3T data, considerably limiting the potential of UHF-MRI. Hence, we propose a novel Deep Learning (DL)-based segmentation technique called GOUHFI: Generalized and Optimized segmentation tool for Ultra-High Field Images, designed to segment UHF images of various contrasts and resolutions. For training, we used a total of 206 label maps from four datasets acquired at 3T, 7T and 9.4T. In contrast to most DL strategies, we used a previously proposed domain randomization approach, in which synthetic images generated from the label maps were used to train a 3D U-Net. GOUHFI was tested on seven different datasets and compared to techniques such as FastSurferVINN and CEREBRUM-7T. It segmented the six contrasts and seven resolutions tested at 3T, 7T and 9.4T, with average Dice-Sorensen Similarity Coefficient (DSC) scores of 0.87, 0.84 and 0.91 against the ground truth segmentations at 3T, 7T and 9.4T, respectively. Moreover, GOUHFI demonstrated impressive resistance to the inhomogeneities typical of UHF-MRI, making it a powerful new segmentation tool that allows the usual quantitative analysis pipelines to be applied at UHF. Ultimately, GOUHFI is a promising new segmentation tool, the first of its kind to offer a contrast- and resolution-agnostic alternative for UHF-MRI, for neuroscientists working at UHF or even lower field strengths.

Enhancing Craniomaxillofacial Surgeries with Artificial Intelligence Technologies.

Do W, van Nistelrooij N, Bergé S, Vinayahalingam S

pubmed logopapers · May 16 2025
Artificial intelligence (AI) can be applied across multiple subspecialties of craniomaxillofacial (CMF) surgery. This article first reviews AI fundamentals, focusing on classification, object detection, and segmentation, the core tasks in CMF applications. It then explores the development and integration of AI in dentoalveolar surgery, implantology, traumatology, oncology, craniofacial surgery, and orthognathic and feminization surgery, highlighting AI-driven advancements in diagnosis, pre-operative planning, intra-operative assistance, post-operative management, and outcome prediction. Finally, the challenges in AI adoption are discussed, including data limitations, algorithm validation, and clinical integration.

Fluid fluctuations assessed with artificial intelligence during the maintenance phase impact anti-vascular endothelial growth factor visual outcomes in a multicentre, routine clinical care national age-related macular degeneration database.

Martin-Pinardel R, Izquierdo-Serra J, Bernal-Morales C, De Zanet S, Garay-Aramburu G, Puzo M, Arruabarrena C, Sararols L, Abraldes M, Broc L, Escobar-Barranco JJ, Figueroa M, Zapata MA, Ruiz-Moreno JM, Parrado-Carrillo A, Moll-Udina A, Alforja S, Figueras-Roca M, Gómez-Baldó L, Ciller C, Apostolopoulos S, Mishchuk A, Casaroli-Marano RP, Zarranz-Ventura J

pubmed logopapers · May 16 2025
To evaluate the impact of fluid volume fluctuations quantified with artificial intelligence in optical coherence tomography scans during the maintenance phase and visual outcomes at 12 and 24 months in a real-world, multicentre, national cohort of treatment-naïve neovascular age-related macular degeneration (nAMD) eyes. Demographics, visual acuity (VA) and number of injections were collected using the Fight Retinal Blindness tool. Intraretinal fluid (IRF), subretinal fluid (SRF), pigment epithelial detachment (PED), total fluid (TF) and central subfield thickness (CST) were quantified using the RetinAI Discovery tool. Fluctuations were defined as the SD of within-eye quantified values, and eyes were distributed according to SD quartiles for each biomarker. A total of 452 naïve nAMD eyes were included. Eyes with highest (Q4) versus lowest (Q1) fluid fluctuations showed significantly worse VA change (months 3-12) in IRF -3.91 versus 3.50 letters, PED -4.66 versus 3.29, TF -2.07 versus 2.97 and CST -1.85 versus 2.96 (all p<0.05), but not for SRF 0.66 versus 0.93 (p=0.91). Similar VA outcomes were observed at month 24 for PED -8.41 versus 4.98 (p<0.05), TF -7.38 versus 1.89 (p=0.07) and CST -10.58 versus 3.60 (p<0.05). The median number of injections (months 3-24) was significantly higher in Q4 versus Q1 eyes in IRF 9 versus 8, SRF 10 versus 8 and TF 10 versus 8 (all p<0.05). This multicentre study reports a negative effect of fluid volume fluctuations in specific fluid compartments on VA outcomes during the maintenance phase, suggesting that anatomical and functional treatment response patterns may be fluid-specific.
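The fluctuation metric described above, the SD of each eye's own biomarker values followed by quartile grouping, can be sketched as follows (variable names are illustrative; this is not the study's code):

```python
import numpy as np

def fluctuation_quartiles(volumes_per_eye):
    """volumes_per_eye: list of 1-D sequences, one per eye, holding a fluid
    biomarker (e.g. IRF volume) measured at several maintenance-phase visits.
    Returns each eye's fluctuation (sample SD of its own values) and its
    quartile group 1-4, where Q4 = highest-fluctuation eyes."""
    sds = np.array([np.std(v, ddof=1) for v in volumes_per_eye])
    q1, q2, q3 = np.quantile(sds, [0.25, 0.5, 0.75])
    # Count how many quartile cut points each eye's SD exceeds.
    groups = 1 + (sds > q1).astype(int) + (sds > q2) + (sds > q3)
    return sds, groups
```

Grouping by quartiles of the SD, rather than by the SD itself, is what allows the Q4-versus-Q1 contrasts reported in the abstract.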

Uncertainty quantification for deep learning-based metastatic lesion segmentation on whole body PET/CT.

Schott B, Santoro-Fernandes V, Klanecek Z, Perlman S, Jeraj R

pubmed logopapers · May 16 2025
Deep learning models are increasingly being implemented for automated medical image analysis to inform patient care. Most models, however, lack uncertainty information, without which the reliability of model outputs cannot be ensured. Several uncertainty quantification (UQ) methods exist to capture model uncertainty, yet it is not clear which method is optimal for a given task. The purpose of this work was to investigate several commonly used UQ methods for the critical yet understudied task of metastatic lesion segmentation on whole body PET/CT. Approach: 59 whole body 68Ga-DOTATATE PET/CT images of patients undergoing theranostic treatment of metastatic neuroendocrine tumors were used in this work. A 3D U-Net was trained for lesion segmentation with five-fold cross validation. Uncertainty measures derived from four UQ methods (probability entropy, Monte Carlo dropout, deep ensembles, and test time augmentation) were investigated. Each uncertainty measure was assessed across four quantitative evaluations: (1) its ability to detect artificially degraded image data at low, medium, and high degradation magnitudes; (2) to detect false-positive (FP) predicted regions; (3) to recover false-negative (FN) predicted regions; and (4) to establish correlations with model biomarker extraction and segmentation performance metrics. Results: Test time augmentation and probability entropy respectively achieved the highest and lowest degraded image detection at low (AUC=0.54 vs. 0.68), medium (AUC=0.70 vs. 0.82), and high (AUC=0.83 vs. 0.90) degradation magnitudes. For detecting FPs, all UQ methods achieved strong performance, with AUC values ranging narrowly between 0.77 and 0.81. FN region recovery was strongest for test time augmentation and weakest for probability entropy. Performance in the correlation analysis was mixed: the strongest correlations were achieved by test time augmentation for SUVtotal capture (ρ=0.57) and segmentation Dice coefficient (ρ=0.72), by Monte Carlo dropout for SUVmean capture (ρ=0.35), and by probability entropy for segmentation cross entropy (ρ=0.96). Significance: Overall, test time augmentation demonstrated superior uncertainty quantification performance and is recommended for the metastatic lesion segmentation task; it also offers the advantage of being post hoc and computationally efficient. In contrast, probability entropy performed the worst, highlighting the need for advanced UQ approaches for this task.
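Of the four UQ methods compared, probability entropy is the simplest to state: the per-voxel Shannon entropy of the network's softmax output. A minimal sketch, assuming a `(..., n_classes)` probability array (illustrative only, not the authors' code):

```python
import numpy as np

def probability_entropy(probs, eps=1e-12):
    """Per-voxel Shannon entropy of softmax probabilities.
    probs: array of shape (..., n_classes), summing to 1 on the last axis.
    Entropy is ~0 for confident predictions and log(n_classes) when the
    model is maximally uncertain (uniform probabilities)."""
    p = np.clip(np.asarray(probs, dtype=float), eps, 1.0)  # avoid log(0)
    return -np.sum(p * np.log(p), axis=-1)
```

Because it needs only a single forward pass, this measure is cheap, which is consistent with the abstract's framing of it as a baseline that stronger (but costlier) methods such as test time augmentation outperform.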

Pancreas segmentation using AI developed on the largest CT dataset with multi-institutional validation and implications for early cancer detection.

Mukherjee S, Antony A, Patnam NG, Trivedi KH, Karbhari A, Nagaraj M, Murlidhar M, Goenka AH

pubmed logopapers · May 16 2025
Accurate and fully automated pancreas segmentation is critical for advancing imaging biomarkers in early pancreatic cancer detection and for biomarker discovery in endocrine and exocrine pancreatic diseases. We developed and evaluated a deep learning (DL)-based convolutional neural network (CNN) for automated pancreas segmentation using the largest single-institution dataset to date (n = 3031 CTs). Ground truth segmentations were performed by radiologists, which were used to train a 3D nnU-Net model through five-fold cross-validation, generating an ensemble of top-performing models. To assess generalizability, the model was externally validated on the multi-institutional AbdomenCT-1K dataset (n = 585), for which volumetric segmentations were newly generated by expert radiologists and will be made publicly available. In the test subset (n = 452), the CNN achieved a mean Dice Similarity Coefficient (DSC) of 0.94 (SD 0.05), demonstrating high spatial concordance with radiologist-annotated volumes (Concordance Correlation Coefficient [CCC]: 0.95). On the AbdomenCT-1K dataset, the model achieved a DSC of 0.96 (SD 0.04) and a CCC of 0.98, confirming its robustness across diverse imaging conditions. The proposed DL model establishes new performance benchmarks for fully automated pancreas segmentation, offering a scalable and generalizable solution for large-scale imaging biomarker research and clinical translation.
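Alongside the spatial DSC, this abstract reports Lin's concordance correlation coefficient (CCC) for agreement between predicted and radiologist-annotated volumes. A minimal sketch of that statistic (illustrative, not the authors' implementation):

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between two paired series,
    e.g. predicted vs. reference pancreas volumes. Unlike Pearson's r,
    CCC is 1 only when the points lie exactly on the identity line."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                 # population variances
    cov = ((x - mx) * (y - my)).mean()        # population covariance
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

The `(mx - my) ** 2` term in the denominator is what penalizes systematic over- or under-segmentation even when the two series are perfectly correlated.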

Artificial intelligence generated 3D body composition predicts dose modifications in patients undergoing neoadjuvant chemotherapy for rectal cancer.

Besson A, Cao K, Mardinli A, Wirth L, Yeung J, Kokelaar R, Gibbs P, Reid F, Yeung JM

pubmed logopapers · May 16 2025
Chemotherapy administration is a balancing act between giving enough to achieve the desired tumour response while limiting adverse effects. Chemotherapy dosing is based on body surface area (BSA). Emerging evidence suggests body composition plays a crucial role in the pharmacokinetic and pharmacodynamic profile of cytotoxic agents and could inform optimal dosing. This study aims to assess how lumbosacral body composition influences adverse events in patients receiving neoadjuvant chemotherapy for rectal cancer. A retrospective study (February 2013 to March 2023) examined the impact of body composition on neoadjuvant treatment outcomes for rectal cancer patients. Staging CT scans were analysed using a validated AI model to measure lumbosacral skeletal muscle (SM), intramuscular adipose tissue (IMAT), visceral adipose tissue (VAT), and subcutaneous adipose tissue volume and density. Multivariate analyses explored the relationship between body composition and chemotherapy outcomes. 242 patients were included (164 males, 78 females), median age 63.4 years. Chemotherapy dose reductions occurred more frequently in females (26.9% vs. 15.9%, p = 0.042) and in females with greater VAT density (-82.7 vs. -89.1, p = 0.007) and SM:IMAT+VAT volume ratio (1.99 vs. 1.36, p = 0.042). BSA was a poor predictor of dose reduction (AUC 0.397, sensitivity 38%, specificity 60%) for female patients, whereas the SM:IMAT+VAT volume ratio (AUC 0.651, sensitivity 76%, specificity 61%) and VAT density (AUC 0.699, sensitivity 57%, specificity 74%) showed greater predictive ability. Body composition did not influence dose adjustment in male patients. Lumbosacral body composition outperformed BSA in predicting adverse events in female patients with rectal cancer undergoing neoadjuvant chemotherapy.
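The abstract does not state which BSA formula was used; the Mosteller formula is one common choice, and a sketch of BSA-scaled dosing under that assumption looks like this (function names and the example dose are illustrative):

```python
import math

def bsa_mosteller(height_cm, weight_kg):
    """Mosteller body surface area in m^2: sqrt(height_cm * weight_kg / 3600).
    One of several BSA formulas used in chemotherapy dosing."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def dose_for(bsa_m2, dose_per_m2):
    """Total dose for a BSA-scaled regimen (dose_per_m2 in e.g. mg/m^2)."""
    return bsa_m2 * dose_per_m2
```

Note that BSA depends only on height and weight, which is exactly why the study can contrast it with composition-derived measures (VAT density, SM:IMAT+VAT ratio) that distinguish muscle from adipose tissue.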