
LR-COBRAS: A logic reasoning-driven interactive medical image data annotation algorithm.

Zhou N, Cao J

pubmed · Aug 11, 2025
The volume of image data generated in the medical field is continuously increasing, and manual annotation is both costly and prone to human error. Moreover, deep learning-based medical image algorithms rely on large, accurately annotated training datasets, which are expensive to produce and whose scarcity often leads to unstable models. This study introduces LR-COBRAS, an interactive computer-aided data annotation algorithm designed for medical experts. LR-COBRAS aims to assist healthcare professionals in achieving more precise annotation outcomes through interactive processes, thereby optimizing medical image annotation tasks. A logic reasoning module strengthens the must-link and cannot-link constraints gathered during interaction: by applying rules such as symmetry, transitivity, and consistency, it automatically derives further constraint relationships, reducing the frequency of user interaction while improving clustering accuracy and balancing automation with clinical relevance. Experimental results on the MedMNIST+ and ChestX-ray8 datasets demonstrate that LR-COBRAS significantly outperforms existing methods in clustering accuracy, efficiency, and interaction burden, showcasing superior robustness and applicability. The algorithm provides a novel solution for intelligent medical image analysis. The source code for our implementation is available at https://github.com/cjw-bbxc/MILR-COBRAS.
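
The propagation rules named above map naturally onto a union-find structure. Below is a minimal Python sketch of that idea: must-link (ML) symmetry and transitivity merge items into components, and the consistency rule lifts cannot-links (CL) to whole components, so new constraints can be derived without querying the user. This illustrates the general technique only, not the authors' implementation (which is at the linked repository); all names are ours.

```python
class ConstraintReasoner:
    """Derive new must-link/cannot-link constraints by logical propagation."""

    def __init__(self, n_items):
        self.parent = list(range(n_items))   # union-find over must-link components
        self.cl = set()                      # cannot-links between component roots

    def _find(self, x):
        while self.parent[x] != x:           # path-halving union-find lookup
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def add_must_link(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if (min(ra, rb), max(ra, rb)) in self.cl:
            raise ValueError("inconsistent: must-link conflicts with a cannot-link")
        if ra != rb:
            self.parent[rb] = ra             # transitivity: merge the two components
            # consistency: cannot-links of the absorbed root move to the new root
            self.cl = {tuple(sorted((ra if u == rb else u, ra if w == rb else w)))
                       for u, w in self.cl}

    def add_cannot_link(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            raise ValueError("inconsistent: cannot-link inside a must-link component")
        self.cl.add((min(ra, rb), max(ra, rb)))  # symmetry: stored as a sorted pair

    def query(self, a, b):
        ra, rb = self._find(a), self._find(b)
        if ra == rb:
            return "must-link"               # derived via transitivity
        if (min(ra, rb), max(ra, rb)) in self.cl:
            return "cannot-link"             # derived via consistency
        return "unknown"                     # only here would the user be asked
```

A `query` returning "unknown" is exactly the case where an interactive method falls back to asking the expert; everything else is answered by propagation.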

C⁵-Net: Cross-organ cross-modality CSwin-Transformer coupled convolutional network for dual-task transfer learning in lymph node segmentation and classification.

Wang M, Chen H, Mao L, Jiao W, Han H, Zhang Q

pubmed · Aug 11, 2025
Deep learning has made notable strides in the ultrasonic diagnosis of lymph nodes, yet it faces three primary challenges: a limited number of lymph node images and a scarcity of annotated data; difficulty in comprehensively learning both local and global semantic information; and obstacles to collaborative learning across image segmentation and classification for accurate diagnosis. To address these issues, we propose the Cross-organ Cross-modality CSwin-Transformer Coupled Convolutional Network (C⁵-Net). First, we design a cross-organ and cross-modality transfer learning strategy that leverages skin lesion dermoscopic images, which have abundant annotations and share similarities in field of view and morphology with lymph node ultrasound images. Second, we couple a Transformer with a convolutional network to comprehensively learn both local details and global information. Third, the encoder weights in the C⁵-Net are shared between the segmentation and classification tasks to exploit their synergistic knowledge, enhancing overall performance in ultrasound lymph node diagnosis. Our study leverages 690 lymph node ultrasound images and 1000 skin lesion dermoscopic images. Experimental results show that our C⁵-Net achieves the best segmentation and classification performance for lymph nodes among advanced methods, with a segmentation Dice coefficient of 0.854 and a classification accuracy of 0.874. Our method has consistently shown accuracy and robustness in the segmentation and classification of lymph nodes, contributing to the early and accurate detection of lymph node malignancy, which is potentially essential for effective treatment planning in clinical oncology.
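
For readers unfamiliar with dual-task weight sharing, here is a minimal PyTorch sketch of the third idea: one encoder feeds both a segmentation decoder and a classification head, so gradients from both losses update the shared weights. The real C⁵-Net couples a CSwin-Transformer with convolutions; this toy version uses plain convolutions purely to illustrate the sharing mechanism, and every layer choice here is an assumption.

```python
import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    """Toy shared-encoder network: one backbone, two task heads."""

    def __init__(self, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(                 # shared between both tasks
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Sequential(                # dense segmentation decoder
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                      # per-pixel logit
        )
        self.cls_head = nn.Sequential(                # image-level classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes),
        )

    def forward(self, x):
        feat = self.encoder(x)                        # both losses update these weights
        return self.seg_head(feat), self.cls_head(feat)

model = DualTaskNet()
mask_logits, cls_logits = model(torch.randn(4, 1, 224, 224))
# training would combine the two objectives, e.g.:
# loss = dice_loss(mask_logits, masks) + ce_loss(cls_logits, labels)
```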

Automated Prediction of Bone Volume Removed in Mastoidectomy.

Nagururu NV, Ishida H, Ding AS, Ishii M, Unberath M, Taylor RH, Munawar A, Sahu M, Creighton FX

pubmed · Aug 11, 2025
The bone volume drilled by surgeons during mastoidectomy is determined by the need to localize the position, optimize the view, and reach the surgical endpoint while avoiding critical structures. Predicting the volume of bone to be removed before an operation can significantly enhance surgical training by providing precise, patient-specific guidance, and can enable the development of more effective computer-assisted and robotic surgical interventions. Study design: single-institution, cross-sectional study in a VR simulation setting. We developed a deep learning pipeline to automate the prediction of bone volume removed during mastoidectomy using data from virtual reality mastoidectomy simulations. The dataset included 15 deidentified temporal bone computed tomography scans. The network was evaluated using fivefold cross-validation, comparing predicted and actual bone removal with metrics such as the Dice score (DSC) and Hausdorff distance (HD). Our method achieved a median DSC of 0.775 (interquartile range [IQR]: 0.725-0.810) and a median HD of 0.492 mm (IQR: 0.298-0.757 mm). Predictions reached the mastoidectomy endpoint of visualizing the horizontal canal and incus in 80% (12/15) of temporal bones. Qualitative analysis indicated that predictions typically produced realistic mastoidectomy endpoints, though some cases showed excessive or insufficient bone removal, particularly at the temporal bone cortex and tegmen mastoideum. This study establishes a foundational step in using deep learning to predict bone volume removal during mastoidectomy. The results indicate that learning-based methods can reasonably approximate the surgical endpoint of mastoidectomy. Further refinement with larger, more diverse datasets and improved model architectures will be essential for enhancing prediction accuracy.
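
The two reported metrics are standard and easy to reproduce. A small sketch, assuming SciPy and binary volumes (reporting HD in millimeters requires scaling voxel coordinates by the CT voxel spacing first):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, truth):
    """Dice similarity coefficient between two binary volumes."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    return 2.0 * np.logical_and(pred, truth).sum() / (pred.sum() + truth.sum())

def hausdorff(pred_pts, truth_pts):
    """Symmetric Hausdorff distance between two (n_points, 3) coordinate sets.
    Multiply voxel indices by the CT voxel spacing first to obtain millimeters."""
    return max(directed_hausdorff(pred_pts, truth_pts)[0],
               directed_hausdorff(truth_pts, pred_pts)[0])
```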

Enhanced Liver Tumor Detection in CT Images Using 3D U-Net and Bat Algorithm for Hyperparameter Optimization

Nastaran Ghorbani, Bitasadat Jamshidi, Mohsen Rostamy-Malkhalifeh

arxiv preprint · Aug 11, 2025
Liver cancer is one of the most prevalent and lethal forms of cancer, making early detection crucial for effective treatment. This paper introduces a novel approach for automated liver tumor segmentation in computed tomography (CT) images by integrating a 3D U-Net architecture with the Bat Algorithm for hyperparameter optimization. The method enhances segmentation accuracy and robustness by intelligently optimizing key parameters like the learning rate and batch size. Evaluated on a publicly available dataset, our model demonstrates a strong ability to balance precision and recall, with a high F1-score at lower prediction thresholds. This is particularly valuable for clinical diagnostics, where ensuring no potential tumors are missed is paramount. Our work contributes to the field of medical image analysis by demonstrating that the synergy between a robust deep learning architecture and a metaheuristic optimization algorithm can yield a highly effective solution for complex segmentation tasks.
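
As a sketch of how the Bat Algorithm drives such a search: each "bat" is a candidate hyperparameter vector that moves with a frequency-scaled velocity toward the best solution found so far, with loudness and pulse-rate updates controlling exploration. The version below follows Yang's original formulation; `train_and_score` is a hypothetical stand-in for training the 3D U-Net and returning a validation loss, and all constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_search(objective, bounds, n_bats=10, n_iter=30,
               f_min=0.0, f_max=2.0, alpha=0.9, gamma=0.9):
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    x = rng.uniform(lo, hi, (n_bats, dim))       # bat positions = hyperparameter vectors
    v = np.zeros((n_bats, dim))                  # velocities
    loud = np.ones(n_bats)                       # loudness A_i
    rate = np.zeros(n_bats)                      # pulse emission rate r_i
    fit = np.array([objective(xi) for xi in x])
    best = x[fit.argmin()].copy()

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            freq = f_min + (f_max - f_min) * rng.random()
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() > rate[i]:           # local random walk around the best bat
                cand = np.clip(best + 0.01 * loud.mean() * rng.normal(size=dim), lo, hi)
            f_new = objective(cand)
            if f_new <= fit[i] and rng.random() < loud[i]:
                x[i], fit[i] = cand, f_new       # accept the move
                loud[i] *= alpha                 # grow quieter ...
                rate[i] = 1.0 - np.exp(-gamma * t)   # ... and pulse more often
            if f_new <= fit.min():
                best = cand.copy()
    return best

# e.g. search log10(learning rate) in [-5, -2] and batch size in [1, 8]:
# best = bat_search(lambda h: train_and_score(lr=10**h[0], batch=int(round(h[1]))),
#                   bounds=[(-5, -2), (1, 8)])
```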

Decoding fetal motion in 4D ultrasound with DeepLabCut.

Inubashiri E, Kaishi Y, Miyake T, Yamaguchi R, Hamaguchi T, Inubashiri M, Ota H, Watanabe Y, Deguchi K, Kuroki K, Maeda N

pubmed · Aug 11, 2025
This study aimed to objectively and quantitatively analyze fetal motor behavior using DeepLabCut (DLC), a markerless posture estimation tool based on deep learning, applied to four-dimensional ultrasound (4DUS) data collected during the second trimester. We propose a novel clinical method for precise assessment of fetal neurodevelopment. Fifty 4DUS video recordings of normal singleton fetuses aged 12 to 22 gestational weeks were analyzed. Eight fetal joints were manually labeled in 2% of each video to train a customized DLC model. The model's accuracy was evaluated using likelihood scores. Intra- and inter-rater reliability of manual labeling were assessed using intraclass correlation coefficients (ICC). Angular velocity time series derived from joint coordinates were analyzed to quantify fetal movement patterns and developmental coordination. Manual labeling demonstrated excellent reproducibility (inter-rater ICC = 0.990, intra-rater ICC = 0.961). The trained DLC model achieved a mean likelihood score of 0.960, confirming high tracking accuracy. Kinematic analysis revealed developmental trends: localized rapid limb movements were common at 12-13 weeks; movements became more coordinated and systemic by 18-20 weeks, reflecting advancing neuromuscular maturation. Although a modest increase in tracking accuracy was observed with gestational age, this trend did not reach statistical significance (p < 0.001). DLC enables precise quantitative analysis of fetal motor behavior from 4DUS recordings. This AI-driven approach offers a promising, noninvasive alternative to conventional qualitative assessments, providing detailed insights into early fetal neurodevelopmental trajectories and potential early screening for neurodevelopmental disorders.
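
The angular-velocity analysis described here is straightforward to reconstruct from tracked keypoints. A short sketch, with joint names and frame rate as illustrative assumptions rather than details from the paper:

```python
import numpy as np

def joint_angle(a, b, c):
    """Per-frame angle (radians) at joint b formed by keypoints a-b-c.
    a, b, c: (n_frames, 2) coordinate arrays from the pose-estimation output."""
    v1, v2 = a - b, c - b
    cos = (v1 * v2).sum(axis=1) / (np.linalg.norm(v1, axis=1) *
                                   np.linalg.norm(v2, axis=1) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def angular_velocity(angles, fps=30.0):
    """First difference of the joint angle, in radians per second."""
    return np.diff(angles) * fps

# e.g. elbow angular velocity from shoulder/elbow/wrist tracks (names are ours):
# w = angular_velocity(joint_angle(shoulder_xy, elbow_xy, wrist_xy), fps=30)
```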

Ratio of visceral-to-subcutaneous fat area improves long-term mortality prediction over either measure alone: automated CT-based AI measures with longitudinal follow-up in a large adult cohort.

Liu D, Kuchnia AJ, Blake GM, Lee MH, Garrett JW, Pickhardt PJ

pubmed · Aug 11, 2025
Fully automated AI-based algorithms can quantify adipose tissue on abdominal CT images. The aim of this study was to investigate the clinical value of these biomarkers by determining the association between adipose tissue measures and all-cause mortality. This retrospective study included 151,141 patients who underwent abdominal CT for any reason between 2000 and 2021. A validated AI-based algorithm quantified subcutaneous (SAT) and visceral (VAT) adipose tissue cross-sectional area, and a visceral-to-subcutaneous adipose tissue area ratio (VSR) was calculated. Clinical data (age at the time of CT, sex, date of death, date of last contact) were obtained from a database search of the electronic health record. Hazard ratios (HR) and Kaplan-Meier curves assessed the relationship between adipose tissue measures and all-cause mortality, with additional subgroup analyses by age and gender. 138,169 patients were included in the final analysis. Higher VSR was associated with increased mortality; this association was strongest in younger women (HR 3.32 for the highest versus lowest risk quartile in patients aged 18-39 years). Lower SAT was associated with increased mortality regardless of sex or age group (HR up to 1.63 in patients aged 18-39 years). Higher VAT was associated with increased mortality in younger age groups, with the trend weakening and reversing with age; this association was stronger in women. AI-based CT measures of SAT, VAT, and VSR are predictive of mortality, with VSR the highest-performing fat area biomarker overall. These metrics tended to perform better for women and younger patients. Incorporating AI tools can augment patient assessment and management, improving outcomes.
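
As a sketch of how such an analysis is typically set up in Python (using the lifelines package; all column names, the file, and the adjustment set are assumptions, not details from the study):

```python
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

df = pd.read_csv("ct_fat_measures.csv")          # hypothetical export; sex coded 0/1
df["vsr"] = df["vat_cm2"] / df["sat_cm2"]        # visceral-to-subcutaneous ratio

# Cox model: hazard ratio for VSR, adjusted for age and sex
cph = CoxPHFitter()
cph.fit(df[["vsr", "age", "sex", "time_to_event", "died"]],
        duration_col="time_to_event", event_col="died")
cph.print_summary()                               # exp(coef) column gives the HR

# Kaplan-Meier survival curves by VSR quartile
km = KaplanMeierFitter()
for q, grp in df.groupby(pd.qcut(df["vsr"], 4, labels=False)):
    km.fit(grp["time_to_event"], grp["died"], label=f"VSR quartile {q + 1}")
    km.plot_survival_function()                   # overlay the four curves
```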

Deep Learning-Based Desikan-Killiany Parcellation of the Brain Using Diffusion MRI

Yousef Sadegheih, Dorit Merhof

arxiv preprint · Aug 11, 2025
Accurate brain parcellation in diffusion MRI (dMRI) space is essential for advanced neuroimaging analyses. However, most existing approaches rely on anatomical MRI for segmentation and inter-modality registration, a process that can introduce errors and limit the versatility of the technique. In this study, we present a novel deep learning-based framework for direct parcellation based on the Desikan-Killiany (DK) atlas using only diffusion MRI data. Our method utilizes a hierarchical, two-stage segmentation network: the first stage performs coarse parcellation into broad brain regions, and the second stage refines the segmentation to delineate more detailed subregions within each coarse category. We conduct an extensive ablation study to evaluate various diffusion-derived parameter maps, identifying an optimal combination of fractional anisotropy, trace, sphericity, and maximum eigenvalue that enhances parcellation accuracy. When evaluated on the Human Connectome Project and Consortium for Neuropsychiatric Phenomics datasets, our approach achieves superior Dice Similarity Coefficients compared to existing state-of-the-art models. Additionally, our method demonstrates robust generalization across different image resolutions and acquisition protocols, producing more homogeneous parcellations as measured by the relative standard deviation within regions. This work represents a significant advancement in dMRI-based brain segmentation, providing a precise, reliable, and registration-free solution that is critical for improved structural connectivity and microstructural analyses in both research and clinical applications. The implementation of our method is publicly available at github.com/xmindflow/DKParcellationdMRI.
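
The four retained parameter maps are all closed-form functions of the diffusion-tensor eigenvalues. A per-voxel sketch using the standard definitions (we assume the paper's "sphericity" is the Westin C_s measure; that is our reading, not a stated detail):

```python
import numpy as np

def dti_feature_maps(evals):
    """evals: (..., 3) diffusion-tensor eigenvalues sorted descending (l1 >= l2 >= l3)."""
    l1, l2, l3 = evals[..., 0], evals[..., 1], evals[..., 2]
    trace = l1 + l2 + l3
    md = trace / 3.0                                   # mean diffusivity
    fa = np.sqrt(1.5 * ((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
                 / (l1 ** 2 + l2 ** 2 + l3 ** 2 + 1e-12))   # fractional anisotropy
    sphericity = 3.0 * l3 / (trace + 1e-12)            # Westin C_s, in [0, 1]
    return {"fa": fa, "trace": trace, "sphericity": sphericity, "max_eig": l1}
```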

Neonatal neuroimaging: from research to bedside practice.

Cizmeci MN, El-Dib M, de Vries LS

pubmed · Aug 11, 2025
Neonatal neuroimaging is essential in research and clinical practice, offering important insights into brain development and the mechanisms of neurologic injury. Visualizing the brain enables researchers and clinicians to improve neonatal care and parental counselling through better diagnosis and prognostication of disease. The neuroimaging modalities most commonly used in the neonatal intensive care unit (NICU) are cranial ultrasonography (cUS) and magnetic resonance imaging (MRI). Between these modalities, conventional MRI provides the optimal image resolution and detail about the developing brain, while advanced MRI techniques allow for the evaluation of tissue microstructure and functional networks. Over the last two decades, brain MRI techniques have progressed rapidly, and these advances have facilitated high-quality extraction of quantitative features as well as the implementation of novel devices for use in neurological disorders. Major advancements encompass the use of low-field dedicated MRI systems within the NICU and trials of ultralow-field portable MRI systems at the bedside. Additionally, higher-field magnets are utilized to enhance image quality, and ultrafast brain MRI is employed to decrease image acquisition time. Furthermore, advanced MRI sequences, machine learning algorithms, multimodal neuroimaging techniques, motion correction techniques, and novel modalities are used to visualize pathologies that are not visible to the human eye. In this narrative review, we discuss the fundamentals of these neuroimaging modalities and their clinical applications, exploring the present landscape of neonatal neuroimaging from bench to bedside.

SynMatch: Rethinking Consistency in Medical Image Segmentation with Sparse Annotations

Zhiqiang Shen, Peng Cao, Xiaoli Liu, Jinzhu Yang, Osmar R. Zaiane

arxiv preprint · Aug 10, 2025
Label scarcity remains a major challenge in deep learning-based medical image segmentation. Recent studies use strong-weak pseudo supervision to leverage unlabeled data. However, performance is often hindered by inconsistencies between pseudo labels and their corresponding unlabeled images. In this work, we propose SynMatch, a novel framework that sidesteps the need for improving pseudo labels by synthesizing images to match them instead. Specifically, SynMatch synthesizes images using texture and shape features extracted from the same segmentation model that generates the corresponding pseudo labels for unlabeled images. This design enables the generation of highly consistent synthesized-image-pseudo-label pairs without requiring any training parameters for image synthesis. We extensively evaluate SynMatch across diverse medical image segmentation tasks under semi-supervised learning (SSL), weakly-supervised learning (WSL), and barely-supervised learning (BSL) settings with increasingly limited annotations. The results demonstrate that SynMatch achieves superior performance, especially in the most challenging BSL setting. For example, it outperforms the recent strong-weak pseudo supervision-based method by 29.71% and 10.05% on the polyp segmentation task with 5% and 10% scribble annotations, respectively. The code will be released at https://github.com/Senyh/SynMatch.
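
For context, a minimal PyTorch sketch of the strong-weak pseudo-supervision step that SynMatch rethinks: pseudo labels come from a weakly augmented view and supervise a strongly augmented one, with a confidence threshold masking unreliable pixels. SynMatch's contribution, synthesizing an image to match the pseudo label, is not reproduced here; this is only the baseline mechanism, and the function names are ours.

```python
import torch
import torch.nn.functional as F

def strong_weak_step(model, batch, weak_aug, strong_aug, threshold=0.95):
    """One unsupervised step of strong-weak pseudo supervision (FixMatch-style).
    strong_aug must preserve geometry so the dense pseudo labels stay aligned."""
    with torch.no_grad():
        probs = torch.softmax(model(weak_aug(batch)), dim=1)   # (N, C, H, W)
        conf, pseudo = probs.max(dim=1)                        # per-pixel pseudo labels
    logits = model(strong_aug(batch))                          # prediction on strong view
    loss = F.cross_entropy(logits, pseudo, reduction="none")   # (N, H, W)
    return (loss * (conf >= threshold)).mean()                 # keep confident pixels only
```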

The eyelid and pupil dynamics underlying stress levels in awake mice.

Zeng, H.

biorxiv preprint · Aug 10, 2025
Stress is a natural response of the body to perceived threats, and it can have both positive and negative effects on brain hemodynamics. Stress-induced changes in pupil and eyelid size and shape have been used as a biomarker in several fMRI studies. However, there is limited knowledge regarding the behavior of pupil and eyelid dynamics, particularly in animal models. In the present study, pupil and eyelid dynamics were carefully investigated and characterized in a newly developed awake rodent fMRI protocol. Leveraging deep learning techniques, mouse pupil and eyelid diameters were extracted and analyzed across the different training and imaging phases of the protocol. Our findings demonstrate a consistent downward trend in pupil and eyelid dynamics under a meticulously designed training protocol, suggesting that pupil and eyelid behavior can serve as a reliable indicator of stress levels and motion artifacts in awake fMRI studies. The recording platform not only facilitates awake animal MRI studies but also has potential applications in numerous other research areas, owing to its non-invasive nature and straightforward implementation.
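
Extracting a diameter time series from tracked landmarks is simple once keypoints are available. A sketch under the assumption that opposing pupil or eyelid edge points are tracked with per-frame likelihoods (as DeepLabCut-style tools provide); the names and the confidence cutoff are illustrative, not the authors' pipeline:

```python
import numpy as np

def diameter_series(edge_a, edge_b, likelihood, min_conf=0.9):
    """Per-frame distance between two opposing edge keypoints (e.g., left/right
    pupil margin), with low-confidence frames masked to NaN.
    edge_a, edge_b: (n_frames, 2) coordinates; likelihood: (n_frames, 2) scores."""
    d = np.linalg.norm(edge_a - edge_b, axis=1)
    return np.where(likelihood.min(axis=1) >= min_conf, d, np.nan)
```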