Huang J, Shen N, Tan Y, Tang Y, Ding Z

PubMed · Jun 27, 2025
Diagnosis of hydrocephalus involves a careful review of the patient's history and a thorough neurological assessment. Traditionally, diagnosis has depended largely on physicians' clinical experience, but with the advance of precision medicine and individualized treatment, such experience-based methods no longer keep pace with clinical requirements. In response, the medical community is actively pursuing data-driven intelligent diagnostic solutions. Building prognosis prediction models for hydrocephalus has become a new focus, and intelligent prediction systems supported by deep learning offer new technical advantages for clinical diagnosis and treatment decisions. Over the past several years, deep learning algorithms have demonstrated clear advantages in medical image analysis. Studies have reported that convolutional neural networks can diagnose hydrocephalus on magnetic resonance imaging with roughly 90% accuracy, with sensitivity and specificity exceeding those of traditional methods. As deep learning has spread through medical practice, its application to modeling hydrocephalus prognosis has also drawn extensive attention from scholars. This review explores the application of deep learning to hydrocephalus diagnosis and prognosis, focusing on image-based, biochemical, and structured-data models. Highlighting recent advances, challenges, and future directions, the review emphasizes deep learning's potential to enhance personalized treatment and improve outcomes.

Chen HJ, Qiu J, Qi Y, Guo Y, Zhang Z, Qin H, Wu F, Chen F

PubMed · Jun 27, 2025
Magnetic resonance imaging (MRI) has shown that patients with end-stage renal disease have decreased gray matter volume and density. However, cortical area and thickness in patients on hemodialysis are uncertain, and the relationship between patients' cognition and cortical alterations remains unclear. Thirty-six hemodialysis patients and 25 age- and sex-matched healthy controls were enrolled in this study and underwent brain MRI scans and neuropsychological assessments. The cortex was divided into 68 regions according to the Desikan-Killiany atlas. Using FreeSurfer software, we analyzed between-group differences in the cortical area and thickness of each region. Machine learning-based classification was also used to differentiate hemodialysis patients from healthy individuals. The patients exhibited decreased cortical thickness in frontal and temporal regions, including the left banks of the superior temporal sulcus (bankssts), left lingual gyrus, left pars triangularis, bilateral superior temporal gyrus, and right pars opercularis, and decreased cortical area in the left rostral middle frontal gyrus, left superior frontal gyrus, right fusiform gyrus, right pars orbitalis, and right superior frontal gyrus. Decreased cortical thickness was associated with poorer scores on the neuropsychological tests and with increased uric acid and urea levels. The cortical thickness pattern differentiated patients from controls with 96.7% accuracy (97.5% sensitivity, 95.0% specificity, 97.5% precision, AUC 0.983) in the support vector machine analysis. Patients on hemodialysis exhibited decreased cortical area and thickness, which was associated with poorer cognition and uremic toxins.
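For readers who want a concrete picture of the classification step, the following is a minimal sketch, assuming regional cortical thickness values have already been exported (e.g., from FreeSurfer) into a subjects-by-regions matrix; the random placeholder data, linear kernel, and five-fold cross-validation are illustrative choices, not details taken from the study.

```python
# Minimal sketch (not the authors' code): linear SVM on regional cortical
# thickness features with cross-validated AUC and accuracy.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
n_regions = 68                                   # Desikan-Killiany regions
X = rng.normal(size=(61, n_regions))             # placeholder thickness matrix (36 patients + 25 controls)
y = np.array([1] * 36 + [0] * 25)                # 1 = hemodialysis patient, 0 = healthy control

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
proba = cross_val_predict(clf, X, y, cv=cv, method="predict_proba")[:, 1]

print("AUC:     ", roc_auc_score(y, proba))
print("Accuracy:", accuracy_score(y, (proba > 0.5).astype(int)))
```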

Li DL, Zhu L, Liu SL, Wang ZB, Liu JN, Zhou XM, Hu JL, Liu RQ

PubMed · Jun 27, 2025
Early identification of bowel resection risk is crucial for patients with incarcerated inguinal hernia (IIH), yet prompt detection of this risk remains a significant challenge. Advances in radiomic feature extraction and machine learning algorithms have paved the way for innovative diagnostic approaches to assess IIH more effectively. We aimed to devise a radiomic-clinical model to evaluate bowel resection risk in IIH patients and thereby support clinical decision-making. This single-center retrospective study analyzed 214 IIH patients randomized 3:1 into training (n = 161) and test (n = 53) sets. Radiologists segmented hernia sac-trapped bowel volumes of interest (VOIs) on computed tomography images. Radiomic features extracted from the VOIs generated Rad-scores, which were combined with clinical data to construct a nomogram. The nomogram's performance was evaluated against standalone clinical and radiomic models in both cohorts. A total of 1561 radiomic features were extracted from the VOIs. After dimensionality reduction, 13 radiomic features were used with eight machine learning algorithms to develop the radiomic model. The logistic regression algorithm was ultimately selected for its effectiveness, showing an area under the curve (AUC) of 0.828 [95% confidence interval (CI): 0.753-0.902] in the training set and 0.791 (95% CI: 0.668-0.915) in the test set. The comprehensive nomogram incorporating clinical indicators showed strong predictive capability for bowel resection risk in IIH patients, with AUCs of 0.864 (95% CI: 0.800-0.929) and 0.800 (95% CI: 0.669-0.931) for the training and test sets, respectively. Decision curve analysis revealed the integrated model's superior performance over standalone clinical and radiomic approaches. This radiomic-clinical nomogram proved effective in predicting bowel resection risk in IIH patients and can substantially aid clinical decision-making.
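The sketch below illustrates one common way such a radiomic model is built, assuming the 1561 features have already been extracted from the VOIs into a table; the univariate filter used here for dimensionality reduction and the random placeholder labels are assumptions, since the abstract does not specify the reduction method.

```python
# Minimal sketch: reduce radiomic features to 13 with a univariate filter
# (a stand-in for the unspecified reduction step), then fit logistic regression
# and report test-set AUC.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
X = rng.normal(size=(214, 1561))       # placeholder radiomic feature matrix
y = rng.integers(0, 2, size=214)       # 1 = bowel resection, 0 = no resection (placeholder labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=1)

model = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=13),      # keep 13 features, mirroring the reported model size
    LogisticRegression(max_iter=1000),
)
model.fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```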

Kirschen MP, Li J, Elmer J, Manteghinejad A, Arefan D, Graham K, Morgan RW, Nadkarni V, Diaz-Arrastia R, Berg R, Topjian A, Vossough A, Wu S

PubMed · Jun 27, 2025
To train deep learning models to detect hypoxic-ischemic brain injury (HIBI) on early CT scans after pediatric out-of-hospital cardiac arrest (OHCA) and determine whether the models could identify HIBI that was not visually appreciable to a radiologist. Retrospective study of children who had a CT scan within 24 hours of OHCA compared to age-matched controls. We designed models to detect HIBI by discriminating CT images from OHCA cases and controls, and to predict death and unfavorable outcome (PCPC 4-6 at hospital discharge) among cases. Model performance was measured by AUC. We trained a second model to distinguish OHCA cases with radiologist-identified HIBI from controls without OHCA and tested the model on OHCA cases without radiologist-identified HIBI. We compared outcomes between OHCA cases with and without model-categorized HIBI. We analyzed 117 OHCA cases (age 3.1 [0.7-12.2] years); 43% died and 58% had unfavorable outcome. Median time from arrest to CT was 2.1 [1.0-7.2] hours. Deep learning models discriminated OHCA cases from controls with a mean AUC of 0.87 ± 0.05. Among OHCA cases, mean AUCs for predicting death and unfavorable outcome were 0.79 ± 0.06 and 0.69 ± 0.06, respectively. Mean AUC was 0.98 ± 0.01 for discriminating between 44 OHCA cases with radiologist-identified HIBI and controls. Among 73 OHCA cases without radiologist-identified HIBI, the model identified 36% as having presumed HIBI; 31% of these died, compared with 17% of cases in which HIBI was identified neither radiologically nor by the model (p = 0.174). Deep learning models can identify HIBI on early CT images after pediatric OHCA and detect some presumed HIBI not visually identified by a radiologist.
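Because the abstract does not describe the network architecture, the block below is only a generic sketch of a small 2D convolutional classifier producing a per-slice HIBI score; the layer sizes, toy batch, and single optimization step are placeholders.

```python
# Generic sketch (architecture assumed, not from the paper): a small 2D CNN
# that outputs a single HIBI-vs-control logit per CT slice.
import torch
import torch.nn as nn

class SliceClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)                  # single logit per slice

    def forward(self, x):                             # x: (batch, 1, H, W) CT slices
        return self.head(self.features(x).flatten(1))

model = SliceClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

x = torch.randn(8, 1, 128, 128)                       # toy batch of preprocessed slices
y = torch.randint(0, 2, (8, 1)).float()               # toy case labels
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
print("training loss:", float(loss))
```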

Wei M, He S, Meng D, Lv Z, Guo H, Yang G, Wang Z

PubMed · Jun 27, 2025
Resistance exercise, Taichi exercise, and a hybrid program combining the two have been shown to increase skeletal muscle mass in older individuals with sarcopenia. However, the exercise sequence has not been comprehensively investigated. We therefore designed a self-determined sequence exercise program, incorporating resistance exercise, Taichi, and the hybrid program, to counteract the decline in skeletal muscle area and reverse sarcopenia in older individuals. Ninety-one older patients with sarcopenia aged 60 to 75 completed this three-stage, 24-week randomized controlled trial, comprising the self-determined sequence exercise program group (n = 31), the resistance training group (n = 30), and the control group (n = 30). We used quantitative computed tomography to measure the effects of the different intervention protocols on participants' skeletal muscle mass. Demographic variables were analyzed using one-way analysis of variance and chi-square tests, and experimental data were examined using repeated-measures analysis of variance. We also used a Markov model to describe the effectiveness of the exercise programs across the three intervention stages and explainable artificial intelligence to predict whether the intervention programs could reverse sarcopenia. Repeated-measures analysis of variance indicated statistically significant Group × Time interactions for L3 skeletal muscle density, L3 skeletal muscle area, muscle fat infiltration, handgrip strength, and relative skeletal muscle mass index. The stacking model achieved the best accuracy (84.5%) and the best F1-score (68.8%) compared with the other algorithms. In the self-determined sequence exercise program group, strength training contributed most to the reversal of sarcopenia. A self-determined sequence exercise program can improve skeletal muscle area in older people with sarcopenia, and our stacking model can accurately predict whether sarcopenia in older people can be reversed. The trial was registered at ClinicalTrials.gov (NCT05694117). Our findings indicate that such tailored exercise interventions can substantially benefit sarcopenic patients, and our stacking model provides an accurate predictive tool for assessing the reversibility of sarcopenia in older adults. This approach not only enhances individual health outcomes but also informs the future development of targeted exercise programs to mitigate age-related muscle decline.
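As a hedged illustration of the stacking approach, the sketch below builds a generic stacked ensemble on placeholder tabular features; the specific base learners, meta-learner, and feature set are assumptions, not the study's configuration.

```python
# Illustrative stacking ensemble (base learners and meta-learner assumed):
# predict whether sarcopenia is reversed from tabular baseline features.
import numpy as np
from sklearn.ensemble import StackingClassifier, RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.normal(size=(91, 12))          # placeholder clinical + QCT-derived features
y = rng.integers(0, 2, size=91)        # 1 = sarcopenia reversed, 0 = not reversed (placeholder)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=200, random_state=2)),
        ("gb", GradientBoostingClassifier(random_state=2)),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)
print("cross-validated F1:", cross_val_score(stack, X, y, cv=5, scoring="f1").mean())
```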

Cavicchioli M, Moglia A, Garret G, Puglia M, Vacavant A, Pugliese G, Cerveri P

PubMed · Jun 27, 2025
Accurate segmentation of the hepatic and portal veins is critical for preoperative planning in liver surgery, especially for resection and transplantation. Extensive anatomical variability, pathological alterations, and the inherent class imbalance between background and vascular structures make this task challenging. Current state-of-the-art deep learning approaches often fail to generalize across patient variability or to maintain vascular topology, limiting their clinical applicability. To overcome these limitations, we propose D²-RD-UNet, a dual-stage, dual-class segmentation framework for hepatic and portal vessels. The D²-RD-UNet architecture employs dense and residual connections to improve feature propagation and segmentation accuracy. It integrates advanced data-driven preprocessing and a dual-path architecture for 3D and 4D data, with the latter concatenating computed tomography (CT) scans with four relevant vesselness filters (Sato, Frangi, OOF, and RORPO). The pipeline is completed by the first centerline-based postprocessing algorithm for multi-class vessel connectivity correction. Additionally, we introduce the first radius-based branching algorithm to evaluate the model's predictions locally, providing detailed insight into the accuracy of vascular reconstructions at different scales. To address the scarcity of well-annotated open datasets for hepatic vessel segmentation, we curated AIMS-HPV-385, a large, pathological, multi-class, validated dataset of 385 CT scans. We trained different configurations of D²-RD-UNet and state-of-the-art models on 327 CT scans from AIMS-HPV-385. Experimental results on the remaining 58 CT scans of AIMS-HPV-385 and on the 20 CT scans of 3D-IRCADb-01 demonstrate the superior performance of the D²-RD-UNet variants over state-of-the-art methods, achieving robust generalization, preserving vascular continuity, and offering a reliable approach for liver vascular reconstruction.
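The 4D input path can be pictured as channel-wise concatenation of the CT volume with vesselness responses. The sketch below is a simplified, assumed version using the two filters available in scikit-image (Frangi and Sato); OOF and RORPO, also used by the authors, require separate implementations and are omitted.

```python
# Simplified sketch: build a multi-channel volume by stacking the CT with
# vesselness responses (only the Frangi and Sato filters from scikit-image
# are shown; OOF and RORPO are omitted here).
import numpy as np
from skimage.filters import frangi, sato

ct = np.random.rand(32, 64, 64).astype(np.float32)        # placeholder normalized CT volume

channels = np.stack(
    [
        ct,
        frangi(ct, sigmas=(1, 2, 3), black_ridges=False),  # enhance bright tubular structures
        sato(ct, sigmas=(1, 2, 3), black_ridges=False),
    ],
    axis=0,
)                                                          # shape: (3, D, H, W)
print(channels.shape)
```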

Han X, Li W, Zhang Y, Li P, Zhu J, Zhang T, Wang R, Gao Y

PubMed · Jun 27, 2025
The clinical diagnosis of clear cell renal cell carcinoma (ccRCC) primarily depends on histopathological analysis and computed tomography (CT). Although pathological diagnosis is regarded as the gold standard, invasive procedures such as biopsy carry the risk of tumor dissemination. Conversely, CT scanning offers a non-invasive alternative, but its resolution may be inadequate for detecting microscopic tumor features, which limits the performance of prognostic assessments. To address this issue, we propose a high-order correlation-driven method for predicting survival in ccRCC using only CT images, achieving performance comparable to that of the pathological gold standard. The proposed method uses a cross-modal hypergraph neural network based on hypergraph transfer learning to perform high-order correlation modeling and semantic feature extraction from whole-slide pathological images and CT images. By employing multi-kernel maximum mean discrepancy, we transfer the high-order semantic features learned from pathological images to the CT-based hypergraph neural network channel. During the testing phase, high-precision survival predictions are achieved using only CT images, eliminating the need for pathological images. This approach not only reduces the risks associated with invasive examinations for patients but also significantly enhances clinical diagnostic efficiency. The proposed method was validated on four datasets: three collected from different hospitals and one from the public TCGA dataset. Experimental results indicate that the proposed method achieves higher concordance indices across all datasets than competing methods.
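A minimal sketch of a multi-kernel (Gaussian) maximum mean discrepancy term of the kind used to align the pathology- and CT-channel embeddings; the kernel bandwidths, feature dimensions, and batch sizes are illustrative assumptions.

```python
# Illustrative multi-kernel MMD between pathology-branch and CT-branch embeddings.
import torch

def multi_kernel_mmd(f_src, f_tgt, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Squared MMD with a sum of Gaussian kernels; inputs are (N, d) embeddings."""
    x = torch.cat([f_src, f_tgt], dim=0)
    d2 = torch.cdist(x, x) ** 2                              # pairwise squared distances
    k = sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas)   # multi-kernel Gram matrix
    n = f_src.shape[0]
    return k[:n, :n].mean() + k[n:, n:].mean() - 2 * k[:n, n:].mean()

path_feat = torch.randn(16, 256)   # placeholder pathology-channel embeddings
ct_feat = torch.randn(16, 256)     # placeholder CT-channel embeddings
print("MMD^2:", float(multi_kernel_mmd(path_feat, ct_feat)))
```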

Ye Z, Wang K, Lv W, Feng Q, Lu L

PubMed · Jun 27, 2025
Deep learning-based medical image segmentation faces significant challenges arising from limited labeled data and domain shift. Prior approaches have largely addressed these issues independently, yet they commonly co-occur in medical imaging. A method that generalizes to unseen domains using only minimal annotations offers significant practical value by reducing data annotation and development costs. In pursuit of this goal, we propose FSDA-DG, a novel solution for improving the cross-domain generalizability of medical image segmentation with few single-source domain annotations. Specifically, our approach introduces semantics-guided semi-supervised data augmentation, which divides images into broad global regions and semantics-guided local regions and applies distinct augmentation strategies to each to enrich the data distribution. Within this framework, both labeled and unlabeled data are transformed into extensive domain knowledge while preserving domain-invariant semantic information. Additionally, FSDA-DG employs a multi-decoder U-Net semi-supervised learning (SSL) network to improve domain-invariant representation learning through a consistency assumption across multiple perturbations. By integrating data-level and model-level designs, FSDA-DG achieves superior performance compared to state-of-the-art methods on two challenging single-domain generalization (SDG) tasks with limited annotations. The code is publicly available at https://github.com/yezanting/FSDA-DG.
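The consistency idea can be sketched as follows: predictions for differently perturbed views of an unlabeled image are encouraged to agree. This toy version uses Gaussian-noise perturbations and a single shared decoder in place of the paper's multi-decoder design, so it is an assumption-laden simplification rather than FSDA-DG itself.

```python
# Toy consistency objective: predictions from noise-perturbed views of an
# unlabeled image are pulled toward their mean (single model stands in for
# the paper's multi-decoder U-Net).
import torch
import torch.nn.functional as F

def consistency_loss(model, unlabeled, noise_std=0.1, n_views=2):
    probs = []
    for _ in range(n_views):
        view = unlabeled + noise_std * torch.randn_like(unlabeled)
        probs.append(torch.softmax(model(view), dim=1))   # (B, C, H, W) class probabilities
    mean_p = torch.stack(probs).mean(dim=0)
    return sum(F.mse_loss(p, mean_p) for p in probs) / n_views

model = torch.nn.Conv2d(1, 4, kernel_size=3, padding=1)   # toy stand-in for a segmentation network
unlabeled_batch = torch.randn(2, 1, 64, 64)
print("consistency loss:", float(consistency_loss(model, unlabeled_batch)))
```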

Uus, A., Avena Zampieri, C., Downes, F., Egloff Collado, A., Hall, M., Davidson, J., Payette, K., Aviles Verdera, J., Grigorescu, I., Hajnal, J. V., Deprez, M., Aertsen, M., Hutter, J., Rutherford, M., Deprest, J., Story, L.

medRxiv preprint · Jun 26, 2025
Fetal MRI is increasingly employed in the diagnosis of fetal lung anomalies, and segmentation-derived total fetal lung volumes are used as one of the parameters for predicting neonatal outcome. In clinical practice, however, segmentation is performed manually in 2D motion-corrupted stacks with thick slices, which is time consuming and can lead to variation in the estimated volumes. Furthermore, there is a known lack of consensus regarding a universal lung parcellation protocol and the expected normal total lung volume formulas, and the lungs are typically segmented as a single label without parcellation into lobes. In terms of automation, to the best of our knowledge, no works on multi-lobe segmentation for fetal lung MRI have been reported. This work introduces the first automated deep learning pipeline for multi-regional lung segmentation of 3D motion-corrected T2w fetal body images, covering both normal anatomy and congenital diaphragmatic hernia cases. The protocol for parcellation into the 5 standard lobes was defined in a population-averaged 3D atlas and then used to generate a multi-label training dataset comprising 104 normal anatomy controls and 45 congenital diaphragmatic hernia cases from 0.55T, 1.5T, and 3T acquisition protocols. The performance of a 3D Attention UNet was evaluated on 18 cases and showed good results for normal lung anatomy, with expectedly lower Dice values for the ipsilateral lung in hernia cases. In addition, we produced normal lung volumetry growth charts from 290 controls acquired at 0.55T and 3T. This is the first step towards automated multi-regional fetal lung analysis for 3D fetal MRI.
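Evaluation of such a multi-lobe segmentation typically reports per-label Dice overlap. The sketch below is a minimal, assumed implementation over integer label maps with five lobe labels; the random arrays stand in for predicted and reference segmentations.

```python
# Minimal per-lobe Dice evaluation over integer label maps (0 = background,
# 1-5 = lung lobes); random arrays stand in for predicted and reference labels.
import numpy as np

def dice_per_label(pred, gt, labels=range(1, 6)):
    scores = {}
    for lab in labels:
        p, g = pred == lab, gt == lab
        denom = p.sum() + g.sum()
        scores[lab] = 2.0 * np.logical_and(p, g).sum() / denom if denom else float("nan")
    return scores

pred = np.random.randint(0, 6, size=(64, 64, 64))   # placeholder predicted label map
gt = np.random.randint(0, 6, size=(64, 64, 64))     # placeholder ground-truth label map
print(dice_per_label(pred, gt))
```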

Shenoy, R., Samra, G. S., Sekhri, R., Yoon, H.-J., Teli, S., DeSilva, I., Tu, Z., Maconachie, G. D., Thomas, M. G.

medRxiv preprint · Jun 26, 2025
Importance: Differentiating pseudopapilloedema from papilloedema is challenging but critical for prompt diagnosis and to avoid unnecessary invasive procedures. Following a diagnosis of papilloedema, objectively grading severity is important for determining the urgency of management and therapeutic response. Automated machine learning (AutoML) has emerged as a promising tool for diagnosis in medical imaging and may provide accessible opportunities for consistent and accurate diagnosis and severity grading of papilloedema.
Objective: This study evaluates the feasibility of AutoML models for distinguishing the presence and severity of papilloedema using near-infrared reflectance (NIR) images obtained from standard optical coherence tomography (OCT), comparing the performance of different AutoML platforms.
Design, setting, and participants: A retrospective cohort study was conducted using data from University Hospitals of Leicester NHS Trust. The study involved 289 adult and paediatric patients (813 images) who underwent optic nerve head-centred OCT imaging between 2021 and 2024. The dataset included patients with normal optic discs (69 patients, 185 images), papilloedema (135 patients, 372 images), and optic disc drusen (ODD) (85 patients, 256 images). Three AutoML platforms, Amazon Rekognition, Medic Mind (MM), and Google Vertex, were evaluated for their ability to classify and grade papilloedema severity.
Main outcomes and measures: Two classification tasks were performed: (1) distinguishing papilloedema from normal discs and ODD; (2) grading papilloedema severity (mild/moderate vs. severe). Model performance was evaluated using area under the curve (AUC), precision, recall, F1 score, and confusion matrices for all six models.
Results: Amazon Rekognition outperformed the other platforms, achieving the highest AUC (0.90) and F1 score (0.81) in distinguishing papilloedema from normal/ODD. For papilloedema severity grading, Amazon Rekognition also performed best, with an AUC of 0.90 and F1 score of 0.79. Google Vertex and Medic Mind demonstrated good performance but had slightly lower accuracy and higher misclassification rates.
Conclusions and relevance: This evaluation of three widely available AutoML platforms using NIR images obtained from standard OCT shows promise in distinguishing and grading papilloedema. These models provide an accessible, scalable solution for clinical teams without coding expertise to develop intelligent diagnostic systems that recognise and characterise papilloedema. Further external validation and prospective testing are needed to confirm their clinical utility and applicability in diverse settings.
Key points. Question: Can clinician-led, code-free deep learning models built with automated machine learning (AutoML) accurately differentiate papilloedema from pseudopapilloedema using optic disc imaging? Findings: Models developed on three widely available AutoML platforms successfully distinguished the presence and severity of papilloedema on optic disc imaging, with Amazon Rekognition demonstrating the highest performance. Meaning: AutoML may assist clinical teams, even those with limited coding expertise, in diagnosing papilloedema, potentially reducing the need for invasive investigations.
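For completeness, the reported metrics can be reproduced from per-image scores and labels as in the minimal sketch below; the random placeholder scores and the 0.5 decision threshold are assumptions, since the AutoML platforms themselves are code-free and report these figures directly.

```python
# Minimal sketch: compute the reported evaluation metrics from placeholder
# per-image scores for the binary papilloedema vs. normal/ODD task.
import numpy as np
from sklearn.metrics import (roc_auc_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

rng = np.random.default_rng(3)
y_true = rng.integers(0, 2, size=200)     # 1 = papilloedema, 0 = normal/ODD (placeholder)
scores = rng.random(200)                  # placeholder model confidence scores
y_pred = (scores >= 0.5).astype(int)      # assumed 0.5 operating threshold

print("AUC:      ", roc_auc_score(y_true, scores))
print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1:       ", f1_score(y_true, y_pred))
print("Confusion matrix:\n", confusion_matrix(y_true, y_pred))
```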