Page 2 of 983 results

TopoImages: Incorporating Local Topology Encoding into Deep Learning Models for Medical Image Classification

Pengfei Gu, Hongxiao Wang, Yejia Zhang, Huimin Li, Chaoli Wang, Danny Chen

arXiv preprint, Aug 3, 2025
Topological structures in image data, such as connected components and loops, play a crucial role in understanding image content (e.g., biomedical objects). Despite remarkable successes of numerous image processing methods that rely on appearance information, these methods often lack sensitivity to topological structures when used in general deep learning (DL) frameworks. In this paper, we introduce a new general approach, called TopoImages (for Topology Images), which computes a new representation of input images by encoding the local topology of patches. In TopoImages, we leverage persistent homology (PH) to encode geometric and topological features inherent in image patches. Our main objective is to capture topological information in local patches of an input image in a vectorized form. Specifically, we first compute persistence diagrams (PDs) of the patches, and then vectorize and arrange these PDs into long vectors for the pixels of the patches. The resulting multi-channel image-form representation is called a TopoImage. TopoImages offers a new perspective for data analysis. To garner diverse and significant topological features in image data and ensure a more comprehensive and enriched representation, we further generate multiple TopoImages of the input image using various filtration functions, which we call multi-view TopoImages. The multi-view TopoImages are fused with the input image for DL-based classification, yielding considerable improvement. Our TopoImages approach is highly versatile and can be seamlessly integrated into common DL frameworks. Experiments on three public medical image classification datasets demonstrate noticeably improved accuracy over state-of-the-art methods.
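As an illustration of the patch-level encoding described above, the sketch below computes a persistence diagram per patch with a cubical complex and broadcasts a fixed-length vectorization to that patch's pixels. It is a minimal sketch assuming the gudhi library and a single sublevel-set filtration; patch_topo_vector, topo_image, the patch size, and the top-k persistence vectorization are illustrative choices, not the paper's exact filtrations or encoding.

```python
# Minimal sketch of per-patch persistence-diagram vectorization in the spirit
# of TopoImages. Assumes gudhi is installed; the paper's actual scheme differs.
import numpy as np
import gudhi


def patch_topo_vector(patch: np.ndarray, k: int = 8) -> np.ndarray:
    """Vectorize 0- and 1-dim persistence of a grayscale patch (sublevel-set
    filtration on a cubical complex), keeping the k largest finite persistences."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=patch.astype(np.float64))
    cc.compute_persistence()
    vec = []
    for dim in (0, 1):
        dgm = cc.persistence_intervals_in_dimension(dim)
        pers = dgm[:, 1] - dgm[:, 0] if len(dgm) else np.array([])
        pers = pers[np.isfinite(pers)]                 # drop the essential class
        pers = np.sort(pers)[::-1][:k]                 # k most persistent features
        vec.append(np.pad(pers, (0, k - len(pers))))   # fixed length per dimension
    return np.concatenate(vec)                         # shape (2k,)


def topo_image(image: np.ndarray, patch: int = 8, k: int = 8) -> np.ndarray:
    """Assemble a multi-channel 'TopoImage'-like array: every pixel of a patch
    receives that patch's topology vector as its channel values (non-overlapping
    patches here for brevity)."""
    h, w = image.shape
    out = np.zeros((2 * k, h, w), dtype=np.float32)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            v = patch_topo_vector(image[i:i + patch, j:j + patch], k)
            out[:, i:i + patch, j:j + patch] = v[:, None, None]
    return out
```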

Deep Learning in Myocarditis: A Novel Approach to Severity Assessment

Nishimori, M., Otani, T., Asaumi, Y., Ohta-Ogo, K., Ikeda, Y., Amemiya, K., Noguchi, T., Izumi, C., Shinohara, M., Hatakeyama, K., Nishimura, K.

medRxiv preprint, Aug 2, 2025
Background: Myocarditis is a life-threatening disease with significant hemodynamic risks during the acute phase. Although histopathological examination of myocardial biopsy specimens remains the gold standard for diagnosis, there is no established method for objectively quantifying cardiomyocyte damage. We aimed to develop an AI model to evaluate clinical myocarditis severity using comprehensive pathology data.
Methods: We retrospectively analyzed 314 patients (1076 samples) who underwent myocardial biopsy from 2002 to 2021 at the National Cerebrovascular Center. Among these patients, 158 were diagnosed with myocarditis based on the Dallas criteria. A Multiple Instance Learning (MIL) model served as a pre-trained classifier to detect myocarditis across whole-slide images. We then constructed two clinical severity-prediction models: (1) a logistic regression model (Model 1) using the density of inflammatory cells per unit area, and (2) a Transformer-based model (Model 2), which processed the top-ranked patches identified by the MIL model to predict clinically severe outcomes.
Results: Model 1 achieved an AUROC of 0.809, indicating a robust association between inflammatory cell density and severe myocarditis. In contrast, Model 2, the Transformer-based approach, yielded an AUROC of 0.993 and demonstrated higher accuracy and precision for severity prediction. Attention score visualizations showed that Model 2 captured both inflammatory cell infiltration and additional morphological features. These findings suggest that combining MIL with Transformer architectures enables more comprehensive identification of key histological markers associated with clinically severe disease.
Conclusions: Our results highlight that a Transformer-based AI model analyzing whole-slide pathology images can accurately assess clinical myocarditis severity. Moreover, simply quantifying the extent of inflammatory cell infiltration also correlates strongly with clinical outcomes. These methods offer a promising avenue for improving diagnostic precision, guiding treatment decisions, and ultimately enhancing patient management. Future prospective studies are warranted to validate these models in broader clinical settings and facilitate their integration into routine pathological workflows.
What is new?
- This is the first study to apply an AI model for the diagnosis and severity assessment of myocarditis.
- New evidence shows that inflammatory cell infiltration is related to the severity of myocarditis.
- Using information from the entire tissue, not just inflammatory cells, allows for a more accurate assessment of myocarditis severity.
What are the clinical implications?
- The use of the AI model allows for an unprecedented histological evaluation of myocarditis severity, which can enhance early diagnosis and intervention strategies.
- Rapid and precise assessments of myocarditis severity by the AI model can support clinicians in making timely and appropriate treatment decisions, potentially improving patient outcomes.
- The incorporation of this AI model into clinical practice may streamline diagnostic workflows and optimize the allocation of medical resources, enhancing overall patient care.
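A minimal PyTorch sketch of the second stage described above (a Transformer that reads only the top-ranked patches flagged by an attention-MIL scorer) might look as follows. The layer sizes, the number of selected patches, and the mean-pooled classification head are assumptions for illustration, not the authors' architecture.

```python
# Sketch: attention-MIL patch ranking followed by a Transformer over the
# top-ranked patch embeddings, producing a single severity logit per slide.
import torch
import torch.nn as nn


class TopPatchSeverityModel(nn.Module):
    def __init__(self, feat_dim=512, d_model=256, k=32):
        super().__init__()
        self.k = k
        self.attn_score = nn.Sequential(            # MIL attention scorer
            nn.Linear(feat_dim, 128), nn.Tanh(), nn.Linear(128, 1))
        self.proj = nn.Linear(feat_dim, d_model)
        enc_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.head = nn.Linear(d_model, 1)           # severe vs. non-severe logit

    def forward(self, patch_feats):                          # (B, N_patches, feat_dim)
        scores = self.attn_score(patch_feats).squeeze(-1)    # (B, N)
        k = min(self.k, patch_feats.size(1))
        idx = torch.topk(scores, k, dim=1).indices           # (B, k) top-ranked patches
        idx = idx.unsqueeze(-1).expand(-1, -1, patch_feats.size(-1))
        top_feats = torch.gather(patch_feats, 1, idx)        # (B, k, feat_dim)
        tokens = self.encoder(self.proj(top_feats))          # (B, k, d_model)
        return self.head(tokens.mean(dim=1)).squeeze(-1)     # (B,) severity logit


# Usage: logits = TopPatchSeverityModel()(torch.randn(2, 500, 512))
```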

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed, Aug 1, 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid and automated ways to measure body features like muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA using MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created using an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to segment 120 real T2-weighted MRI sequences from 120 patients (46% female) with a median age of 56 (interquartile range, 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using Sørensen-Dice, Surface Dice, and Hausdorff Distance metrics, including 95% confidence intervals for cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). Furthermore, the body part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice of the ensemble models across body parts (2D = 0.971, 3D = 0.969, P = ns) and body regions (2D = 0.935, 3D = 0.955, P < 0.001) indicates stable performance across all classes. The presented approach facilitates efficient and automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
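For the reported metrics, the sketch below shows per-case Dice and a percentile-bootstrap 95% confidence interval over cases, using only NumPy. The bootstrap procedure is an assumption for illustration; the study does not state how its confidence intervals were computed.

```python
# Sketch: Dice coefficient per case, plus a percentile-bootstrap CI of the mean.
import numpy as np


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(pred, gt).sum() / denom


def dice_with_ci(per_case_dice, n_boot=2000, alpha=0.05, seed=0):
    """Mean Dice over cases with a percentile-bootstrap (1 - alpha) CI."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_case_dice, dtype=float)
    boots = np.array([rng.choice(scores, size=len(scores), replace=True).mean()
                      for _ in range(n_boot)])
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return scores.mean(), (lo, hi)
```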

MR-AIV reveals <i>in vivo</i> brain-wide fluid flow with physics-informed AI.

Toscano JD, Guo Y, Wang Z, Vaezi M, Mori Y, Karniadakis GE, Boster KAS, Kelley DH

PubMed, Aug 1, 2025
The circulation of cerebrospinal and interstitial fluid plays a vital role in clearing metabolic waste from the brain, and its disruption has been linked to neurological disorders. However, directly measuring brain-wide fluid transport, especially in the deep brain, has remained elusive. Here, we introduce magnetic resonance artificial intelligence velocimetry (MR-AIV), a framework featuring a specialized physics-informed architecture and optimization method that reconstructs three-dimensional fluid velocity fields from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). MR-AIV unveils brain-wide velocity maps while providing estimates of tissue permeability and pressure fields, quantities inaccessible to other methods. Applied to the brain, MR-AIV reveals a functional landscape of interstitial and perivascular flow, quantitatively distinguishing slow diffusion-driven transport (∼0.1 µm/s) from rapid advective flow (∼3 µm/s). This approach enables new investigations into brain clearance mechanisms and fluid dynamics in health and disease, with broad potential applications to other porous media systems, from geophysics to tissue mechanics.
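The kind of physics constraint such a framework optimizes can be sketched as follows: a coordinate network predicts tracer concentration and velocity, and automatic differentiation penalizes violations of advection-diffusion transport plus, as a simplifying assumption here, incompressibility. The network interface, the fixed diffusivity D, and the choice of residuals are illustrative and are not MR-AIV's actual formulation, which also involves porous-media (permeability and pressure) terms.

```python
# Sketch of physics-informed residuals for velocity reconstruction from a
# tracer concentration field, using PyTorch autograd.
import torch


def grad(out, x):
    return torch.autograd.grad(out, x, grad_outputs=torch.ones_like(out),
                               create_graph=True)[0]


def physics_residuals(net, xyzt, D=1e-3):
    xyzt = xyzt.requires_grad_(True)                       # (N, 4): x, y, z, t
    c, u, v, w = net(xyzt).unbind(dim=-1)                  # net: (N, 4) -> (N, 4)
    dc = grad(c, xyzt)                                     # columns: c_x, c_y, c_z, c_t
    c_x, c_y, c_z, c_t = dc[:, 0], dc[:, 1], dc[:, 2], dc[:, 3]
    lap_c = (grad(c_x, xyzt)[:, 0] + grad(c_y, xyzt)[:, 1] + grad(c_z, xyzt)[:, 2])
    transport = c_t + u * c_x + v * c_y + w * c_z - D * lap_c   # advection-diffusion
    div_u = grad(u, xyzt)[:, 0] + grad(v, xyzt)[:, 1] + grad(w, xyzt)[:, 2]
    return transport, div_u                                # both driven toward zero


# Training (not shown) would add a data term matching c to the DCE-MRI signal.
```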

Enhanced stroke risk prediction in hypertensive patients through deep learning integration of imaging and clinical data.

Li H, Zhang T, Han G, Huang Z, Xiao H, Ni Y, Liu B, Lin W, Lin Y

PubMed, Jul 31, 2025
Stroke is one of the leading causes of death and disability worldwide, with a significantly elevated incidence among individuals with hypertension. Conventional risk assessment methods primarily rely on a limited set of clinical parameters and often exclude imaging-derived structural features, resulting in suboptimal predictive accuracy. This study aimed to develop a deep learning-based multimodal stroke risk prediction model by integrating carotid ultrasound imaging with multidimensional clinical data to enable precise identification of high-risk individuals among hypertensive patients. A total of 2,176 carotid artery ultrasound images from 1,088 hypertensive patients were collected. ResNet50 was employed to automatically segment the carotid intima-media and extract key structural features. These imaging features, along with clinical variables such as age, blood pressure, and smoking history, were fused using a Vision Transformer (ViT) and fed into a Radial Basis Probabilistic Neural Network (RBPNN) for risk stratification. The model's performance was systematically evaluated using metrics including AUC, Dice coefficient, IoU, and Precision-Recall curves. The proposed multimodal fusion model achieved outstanding performance on the test set, with an AUC of 0.97, a Dice coefficient of 0.90, and an IoU of 0.80. Ablation studies demonstrated that the inclusion of ViT and RBPNN modules significantly enhanced predictive accuracy. Subgroup analysis further confirmed the model's robust performance in high-risk populations, such as those with diabetes or smoking history. The deep learning-based multimodal fusion model effectively integrates carotid ultrasound imaging and clinical features, significantly improving the accuracy of stroke risk prediction in hypertensive patients. The model demonstrates strong generalizability and clinical application potential, offering a valuable tool for early screening and personalized intervention planning for stroke prevention.
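As a rough sketch of the final risk-stratification stage, the snippet below implements a generic radial-basis (Parzen-window) probabilistic classifier over pre-fused feature vectors. The class name, the single bandwidth sigma, and the assumption that imaging and clinical features arrive already concatenated are illustrative; the paper's ViT fusion and exact RBPNN design are not reproduced here.

```python
# Sketch: RBF/Parzen-style probabilistic classifier over fused feature vectors.
import numpy as np


class RadialBasisProbabilisticClassifier:
    def __init__(self, sigma: float = 1.0):
        self.sigma = sigma

    def fit(self, X: np.ndarray, y: np.ndarray):
        self.classes_ = np.unique(y)
        self.prototypes_ = [X[y == c] for c in self.classes_]   # class-wise centers
        return self

    def predict_proba(self, X: np.ndarray) -> np.ndarray:
        dens = []
        for protos in self.prototypes_:
            d2 = ((X[:, None, :] - protos[None, :, :]) ** 2).sum(-1)   # squared distances
            dens.append(np.exp(-d2 / (2 * self.sigma ** 2)).mean(1))   # Parzen estimate
        dens = np.stack(dens, axis=1)
        return dens / dens.sum(axis=1, keepdims=True)                  # class posteriors


# Usage (hypothetical inputs): fuse imaging and clinical features, e.g.
# fused = np.concatenate([imaging_feats, clinical_feats], axis=1), then
# RadialBasisProbabilisticClassifier(sigma=2.0).fit(fused_tr, y_tr).predict_proba(fused_te)
```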

Topology Optimization in Medical Image Segmentation with Fast Euler Characteristic

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arXiv preprint, Jul 31, 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated based on conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to implement for high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topological correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
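One quick way to compute a 2D Euler characteristic, in the spirit of the fast formulation mentioned above, is to count vertices, edges, and faces of the cubical complex spanned by the foreground pixels (χ = V - E + F). The sketch below uses this closed-pixel convention with plain NumPy; the paper's 2D/3D formulation and its topological violation maps are more general.

```python
# Sketch: Euler characteristic of a 2D binary mask as chi = V - E + F on the
# complex of closed foreground pixels (one convention among several).
import numpy as np


def euler_characteristic_2d(mask: np.ndarray) -> int:
    m = mask.astype(bool)
    h, w = m.shape
    faces = int(m.sum())                                    # one 2-cell per foreground pixel
    p = np.zeros((h + 2, w + 2), dtype=bool)                # zero-padded mask
    p[1:-1, 1:-1] = m
    # A corner vertex exists if any of its up-to-4 incident pixels is foreground.
    verts = (p[:-1, :-1] | p[:-1, 1:] | p[1:, :-1] | p[1:, 1:]).sum()
    # Horizontal edges: between vertically adjacent pixel rows (incl. borders).
    e_h = (p[:-1, 1:-1] | p[1:, 1:-1]).sum()
    # Vertical edges: between horizontally adjacent pixel columns (incl. borders).
    e_v = (p[1:-1, :-1] | p[1:-1, 1:]).sum()
    return int(verts) - int(e_h + e_v) + faces              # equals beta_0 - beta_1 in 2D


# e.g. a filled 5x5 square gives chi = 1; punching a one-pixel hole gives chi = 0.
```

The scalar χ error of the abstract is then simply the difference between this value for a predicted mask and for the ground truth.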

SAMSA: Segment Anything Model Enhanced with Spectral Angles for Hyperspectral Interactive Medical Image Segmentation

Alfie Roddan, Tobias Czempiel, Chi Xu, Daniel S. Elson, Stamatia Giannarou

arXiv preprint, Jul 31, 2025
Hyperspectral imaging (HSI) provides rich spectral information for medical imaging, yet encounters significant challenges due to data limitations and hardware variations. We introduce SAMSA, a novel interactive segmentation framework that combines an RGB foundation model with spectral analysis. SAMSA efficiently utilizes user clicks to guide both RGB segmentation and spectral similarity computations. The method addresses key limitations in HSI segmentation through a unique spectral feature fusion strategy that operates independently of spectral band count and resolution. Performance evaluation on publicly available datasets has shown 81.0% 1-click and 93.4% 5-click Dice on a neurosurgical hyperspectral dataset, and 81.1% 1-click and 89.2% 5-click Dice on an intraoperative porcine hyperspectral dataset. Experimental results demonstrate SAMSA's effectiveness in few-shot and zero-shot learning scenarios using minimal training examples. Our approach enables seamless integration of datasets with different spectral characteristics, providing a flexible framework for hyperspectral medical image analysis.
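The spectral side of such a method can be sketched as a per-pixel spectral angle to the spectrum under a user click, which is naturally independent of the number of bands. The function below is a generic spectral-angle map in NumPy; how SAMSA fuses these maps with the RGB foundation model's output is not reproduced here.

```python
# Sketch: spectral-angle map between every pixel spectrum and a clicked pixel.
import numpy as np


def spectral_angle_map(cube: np.ndarray, click_yx) -> np.ndarray:
    """cube: (H, W, B) hyperspectral image; click_yx: (row, col) of the click.
    Returns the per-pixel angle in radians to the clicked spectrum
    (smaller angle = more spectrally similar)."""
    ref = cube[click_yx]                                        # (B,)
    num = (cube * ref).sum(-1)
    denom = np.linalg.norm(cube, axis=-1) * np.linalg.norm(ref) + 1e-12
    return np.arccos(np.clip(num / denom, -1.0, 1.0))           # (H, W)
```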

Topology Optimization in Medical Image Segmentation with Fast χ Euler Characteristic.

Li L, Ma Q, Ouyang C, Paetzold JC, Rueckert D, Kainz B

PubMed, Jul 28, 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated based on conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to implement for high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler Characteristic (χ). First, we propose a fast formulation for χ computation in both 2D and 3D. The scalar χ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topological correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with χ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.

Innovations in gender affirmation: AI-enhanced surgical guides for mandibular facial feminization surgery.

Beyer M, Abazi S, Tourbier C, Burde A, Vinayahalingam S, Ileșan RR, Thieringer FM

PubMed, Jul 25, 2025
This study presents a fully automated digital workflow using artificial intelligence (AI) to create patient-specific cutting guides for mandible-angle osteotomies in facial feminization surgery (FFS). The goal is to achieve predictable, accurate, and safe results with minimal user input, addressing the time and effort required for conventional guide creation. Three-dimensional CT images of 30 male patients were used to develop and validate a workflow that automates two key processes: (1) segmentation of the mandible using a convolutional neural network (3D U-Net architecture) and (2) virtual design of osteotomy-specific cutting guides. Segmentation accuracy was assessed through comparison with expert manual segmentations using the Dice similarity coefficient (DSC) and mean surface distance (MSD). The precision of the cutting guides was evaluated based on osteotomy line accuracy and fit. Workflow efficiency was measured by comparing the time required for automated versus manual planning by expert and novice users. The AI-based workflow achieved a median DSC of 0.966 and a median MSD of 0.212 mm, demonstrating high accuracy. The median planning time was reduced to 1 min and 38 s with the automated system, compared to 19 min and 37 s for an expert and 26 min and 39 s for a novice, representing 10- and 16-fold time reductions, respectively. The AI-based workflow is accurate, efficient, and cost-effective, significantly reducing planning time while maintaining clinical precision. This workflow improves surgical outcomes with precise and reliable cutting guides, enhancing efficiency and accessibility for clinicians, including those with limited experience in designing cutting guides.
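The two reported segmentation metrics can be sketched as follows: the Dice similarity coefficient and a symmetric mean surface distance computed from Euclidean distance transforms. This is a generic implementation assuming SciPy and a given voxel spacing, not the study's exact evaluation code.

```python
# Sketch: DSC and symmetric mean surface distance between two 3D binary masks.
import numpy as np
from scipy import ndimage


def dsc(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)


def mean_surface_distance(a: np.ndarray, b: np.ndarray,
                          spacing=(1.0, 1.0, 1.0)) -> float:
    a, b = a.astype(bool), b.astype(bool)
    surf_a = a ^ ndimage.binary_erosion(a)                 # boundary voxels of a
    surf_b = b ^ ndimage.binary_erosion(b)
    dist_to_b = ndimage.distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = ndimage.distance_transform_edt(~surf_a, sampling=spacing)
    # Symmetric mean of surface-to-surface distances, in the units of `spacing`.
    return 0.5 * (dist_to_b[surf_a].mean() + dist_to_a[surf_b].mean())
```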