Page 68 of 2212205 results

D<sup>2</sup>-RD-UNet: A dual-stage dual-class framework with connectivity correction for hepatic vessels segmentation.

Cavicchioli M, Moglia A, Garret G, Puglia M, Vacavant A, Pugliese G, Cerveri P

PubMed · Jun 27 2025
Accurate segmentation of hepatic and portal veins is critical for preoperative planning in liver surgery, especially for resection and transplantation procedures. Extensive anatomical variability, pathological alterations, and inherent class imbalance between background and vascular structures challenge this task. Current state-of-the-art deep learning approaches often fail to generalize across patient variability or maintain vascular topology, thus limiting their clinical applicability. To overcome these limitations, we propose the D<sup>2</sup>-RD-UNet, a dual-stage, dual-class segmentation framework for hepatic and portal vessels. The D<sup>2</sup>-RD-UNet architecture employs dense and residual connections to improve feature propagation and segmentation accuracy. Our D<sup>2</sup>-RD-UNet integrates advanced data-driven preprocessing with a dual-path architecture for 3D and 4D data, the latter concatenating computed tomography (CT) scans with four relevant vesselness filters (Sato, Frangi, OOF, and RORPO). The pipeline is completed by the first centerline-based postprocessing algorithm for multi-class vessel connectivity correction. Additionally, we introduce the first radius-based branching algorithm to evaluate the model's predictions locally, providing detailed insights into the accuracy of vascular reconstructions at different scales. To address the scarcity of well-annotated open datasets for hepatic vessel segmentation, we curated AIMS-HPV-385, a large, pathological, multi-class, validated dataset of 385 CT scans. We trained different configurations of D<sup>2</sup>-RD-UNet and state-of-the-art models on 327 CTs of AIMS-HPV-385.
Experimental results on the remaining 58 CTs of AIMS-HPV-385 and on the 20 CTs of 3D-IRCADb-01 demonstrate superior performances of the D<sup>2</sup>-RD-UNet variants over state-of-the-art methods, achieving robust generalization, preserving vascular continuity, and offering a reliable approach for liver vascular reconstructions.
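The vesselness filters named above (Sato, Frangi, OOF, RORPO) all score how tube-like the local image geometry is. As a rough, hedged illustration of the idea, here is a minimal 2D Frangi-style response built from Hessian eigenvalues; the smoothing, `beta`, and `c` values below are illustrative choices, not the paper's settings:

```python
import numpy as np

def gaussian_smooth(img, sigma):
    # separable Gaussian blur with a truncated kernel
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode="same"), 1, out)

def vesselness_2d(img, sigma=2.0, beta=0.5, c=0.05):
    """Toy 2D Frangi-style vesselness from Hessian eigenvalues (illustrative)."""
    g = gaussian_smooth(img.astype(float), sigma)
    gy, gx = np.gradient(g)
    hyy, _ = np.gradient(gy)          # second derivatives via central differences
    hxy, hxx = np.gradient(gx)
    tr = hxx + hyy
    root = np.sqrt(((hxx - hyy) / 2) ** 2 + hxy ** 2)
    l_a, l_b = tr / 2 + root, tr / 2 - root
    swap = np.abs(l_a) > np.abs(l_b)  # sort so that |l1| <= |l2|
    l1 = np.where(swap, l_b, l_a)
    l2 = np.where(swap, l_a, l_b)
    rb = np.abs(l1) / (np.abs(l2) + 1e-12)   # blobness ratio (low on tubes)
    s = np.sqrt(l1 ** 2 + l2 ** 2)           # second-order structureness
    v = np.exp(-rb**2 / (2 * beta**2)) * (1 - np.exp(-s**2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)          # bright tubes on a dark background
```

On a synthetic image containing a thin bright line, the response is high on the line and vanishes in flat background, which is what makes such maps useful as extra input channels.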

HGTL: A hypergraph transfer learning framework for survival prediction of ccRCC.

Han X, Li W, Zhang Y, Li P, Zhu J, Zhang T, Wang R, Gao Y

PubMed · Jun 27 2025
The clinical diagnosis of clear cell renal cell carcinoma (ccRCC) primarily depends on histopathological analysis and computed tomography (CT). Although pathological diagnosis is regarded as the gold standard, invasive procedures such as biopsy carry the risk of tumor dissemination. Conversely, CT scanning offers a non-invasive alternative, but its resolution may be inadequate for detecting microscopic tumor features, which limits the performance of prognostic assessments. To address this issue, we propose a high-order correlation-driven method for predicting the survival of ccRCC using only CT images, achieving performance comparable to that of the pathological gold standard. The proposed method utilizes a cross-modal hypergraph neural network based on hypergraph transfer learning to perform high-order correlation modeling and semantic feature extraction from whole-slide pathological images and CT images. By employing multi-kernel maximum mean discrepancy, we transfer the high-order semantic features learned from pathological images to the CT-based hypergraph neural network channel. During the testing phase, high-precision survival predictions were achieved using only CT images, eliminating the need for pathological images. This approach not only reduces the risks associated with invasive examinations for patients but also significantly enhances clinical diagnostic efficiency. The proposed method was validated using four datasets: three collected from different hospitals and one from the public TCGA dataset. Experimental results indicate that the proposed method achieves higher concordance indices across all datasets compared to other methods.
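The multi-kernel maximum mean discrepancy used above to transfer high-order semantics from the pathology branch to the CT branch reduces to comparing kernel mean embeddings of two feature batches. A minimal numpy sketch; the Gaussian bandwidths are illustrative assumptions:

```python
import numpy as np

def mk_mmd(x, y, sigmas=(0.5, 1.0, 2.0)):
    """Biased multi-kernel MMD^2 between two feature batches (n, d).

    Averages Gaussian kernels over several bandwidths, as in multi-kernel
    MMD; the bandwidth set here is an illustrative choice.
    """
    def gram(a, b):
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return sum(np.exp(-d2 / (2 * s**2)) for s in sigmas) / len(sigmas)
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()
```

Minimizing this quantity pulls the CT-channel feature distribution toward the pathology-channel one; identical batches give an MMD of zero, while shifted distributions give a clearly positive value.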

FSDA-DG: Improving cross-domain generalizability of medical image segmentation with few source domain annotations.

Ye Z, Wang K, Lv W, Feng Q, Lu L

PubMed · Jun 27 2025
Deep learning-based medical image segmentation faces significant challenges arising from limited labeled data and domain shifts. While prior approaches have primarily addressed these issues independently, their simultaneous occurrence is common in medical imaging. A method that generalizes to unseen domains using only minimal annotations offers significant practical value due to reduced data annotation and development costs. In pursuit of this goal, we propose FSDA-DG, a novel solution to improve the cross-domain generalizability of medical image segmentation with few single-source domain annotations. Specifically, our approach introduces semantics-guided semi-supervised data augmentation. This method divides images into global broad regions and semantics-guided local regions, and applies distinct augmentation strategies to enrich the data distribution. Within this framework, both labeled and unlabeled data are transformed into extensive domain knowledge while preserving domain-invariant semantic information. Additionally, FSDA-DG employs a multi-decoder U-Net semi-supervised learning (SSL) pipeline to improve domain-invariant representation learning through a consistency prior enforced across multiple perturbations. By integrating data-level and model-level designs, FSDA-DG achieves superior performance compared to state-of-the-art methods in two challenging single domain generalization (SDG) tasks with limited annotations. The code is publicly available at https://github.com/yezanting/FSDA-DG.
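The core of the augmentation described above is applying different perturbation strengths to semantics-guided local regions versus the global background. A toy analogue, where the specific transforms (additive Gaussian noise) and magnitudes are illustrative assumptions, not the paper's:

```python
import numpy as np

def semantics_guided_augment(img, sem_mask, rng, strong_std=0.3, weak_std=0.05):
    """Perturb global (background) and semantic regions differently.

    sem_mask marks semantics-guided local regions (e.g., the organ of
    interest); these receive a gentle perturbation so domain-invariant
    semantic content is preserved, while the background is perturbed
    aggressively to broaden the data distribution.
    """
    noise_strong = rng.normal(0.0, strong_std, img.shape)  # aggressive, background
    noise_weak = rng.normal(0.0, weak_std, img.shape)      # gentle, semantic region
    return np.where(sem_mask, img + noise_weak, img + noise_strong)
```

Both labeled and unlabeled images can be passed through such a transform, which is what lets the SSL branch exploit unlabeled data.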

Self-supervised learning for MRI reconstruction: a review and new perspective.

Li X, Huang J, Sun G, Yang Z

PubMed · Jun 26 2025
To review the latest developments in self-supervised deep learning (DL) techniques for magnetic resonance imaging (MRI) reconstruction, emphasizing their potential to overcome the limitations of supervised methods dependent on fully sampled k-space data. While DL has significantly advanced MRI, supervised approaches require large amounts of fully sampled k-space data for training-a major limitation given the impracticality and expense of acquiring such data clinically. Self-supervised learning has emerged as a promising alternative, enabling model training using only undersampled k-space data, thereby enhancing feasibility and driving research interest. We conducted a comprehensive literature review to synthesize recent progress in self-supervised DL for MRI reconstruction. The analysis focused on methods and architectures designed to improve image quality, reduce scanning time, and address data scarcity challenges, drawing from peer-reviewed publications and technical innovations in the field. Self-supervised DL holds transformative potential for MRI reconstruction, offering solutions to data limitations while maintaining image quality and accelerating scans. Key challenges include robustness across diverse anatomies, standardization of validation, and clinical integration. Future research should prioritize hybrid methodologies, domain-specific adaptations, and rigorous clinical validation. This review consolidates advancements and unresolved issues, providing a foundation for next-generation medical imaging technologies.
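A widely used self-supervised strategy of the kind this review covers (e.g., SSDU) splits the acquired undersampled k-space locations into a disjoint training subset and a held-out loss subset, so the network is supervised without any fully sampled reference. A minimal sketch; the 40% loss fraction is an illustrative choice:

```python
import numpy as np

def split_kspace_mask(acq_mask, loss_frac=0.4, rng=None):
    """Split acquired k-space locations into disjoint training/loss masks.

    The network reconstructs from the training subset only and is
    penalized on the held-out loss subset, enabling training from
    undersampled data alone.
    """
    rng = rng if rng is not None else np.random.default_rng()
    idx = np.flatnonzero(acq_mask)
    loss_idx = rng.choice(idx, size=int(loss_frac * idx.size), replace=False)
    loss_mask = np.zeros(acq_mask.shape, dtype=bool)
    loss_mask.flat[loss_idx] = True
    train_mask = acq_mask.astype(bool) & ~loss_mask
    return train_mask, loss_mask
```

By construction the two masks are disjoint and their union is exactly the acquired set, so no fully sampled k-space is ever required.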

Morphology-based radiological-histological correlation on ultra-high-resolution energy-integrating detector CT using cadaveric human lungs: nodule and airway analysis.

Hata A, Yanagawa M, Ninomiya K, Kikuchi N, Kurashige M, Nishigaki D, Doi S, Yamagata K, Yoshida Y, Ogawa R, Tokuda Y, Morii E, Tomiyama N

PubMed · Jun 26 2025
To evaluate the depiction capability of fine lung nodules and airways using high-resolution settings on ultra-high-resolution energy-integrating detector CT (UHR-CT), incorporating large matrix sizes, thin-slice thickness, and iterative reconstruction (IR)/deep-learning reconstruction (DLR), using cadaveric human lungs and corresponding histological images. Images of 20 lungs were acquired using conventional CT (CCT), UHR-CT, and photon-counting detector CT (PCD-CT). CCT images were reconstructed with a 512 matrix and IR (CCT-512-IR). UHR-CT images were reconstructed with four settings by varying the matrix size and the reconstruction method: UHR-512-IR, UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. Two imaging settings of PCD-CT were used: PCD-512-IR and PCD-1024-IR. CT images were visually evaluated and compared with histology. Overall, 6769 nodules (median: 1321 µm) and 92 airways (median: 851 µm) were evaluated. For nodules, UHR-2048-IR outperformed CCT-512-IR, UHR-512-IR, and UHR-1024-IR (p < 0.001). UHR-1024-DLR showed no significant difference from UHR-2048-IR in the overall nodule score after Bonferroni correction (uncorrected p = 0.043); however, for nodules > 1000 μm, UHR-2048-IR demonstrated significantly better scores than UHR-1024-DLR (p = 0.003). For airways, UHR-1024-IR and UHR-512-IR showed significant differences (p < 0.001), with no notable differences among UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. UHR-2048-IR detected nodules and airways with median diameters of 604 µm and 699 µm, respectively. No significant difference was observed between UHR-512-IR and PCD-512-IR (p > 0.1). PCD-1024-IR outperformed UHR-CTs for nodules > 1000 μm (p ≤ 0.001), while UHR-1024-DLR outperformed PCD-1024-IR for airways > 1000 μm (p = 0.005). UHR-2048-IR demonstrated the highest scores among the evaluated EID-CT images. UHR-CT showed potential for detecting submillimeter nodules and airways. With the 512 matrix, UHR-CT demonstrated performance comparable to PCD-CT. 
Question Data evaluating the depiction capabilities of ultra-high-resolution energy-integrating detector CT (UHR-CT) for fine structures are scarce, as are comparisons with photon-counting detector CT (PCD-CT). Findings UHR-CT depicted nodules and airways with median diameters of 604 µm and 699 µm, showing no significant difference from PCD-CT with the 512 matrix. Clinical relevance High-resolution imaging is crucial for lung diagnosis. UHR-CT has the potential to contribute to pulmonary nodule diagnosis and airway disease evaluation by detecting fine opacities and airways.
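The abstract above reports p-values after Bonferroni correction (e.g., an uncorrected p = 0.043 that loses significance across multiple pairwise comparisons). The correction itself is simple; a sketch, where the count of six pairwise tests is an illustrative assumption:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction: multiply each raw p-value by the number of
    comparisons m, cap at 1.0, and test against alpha."""
    m = len(p_values)
    return [(min(p * m, 1.0), p * m < alpha) for p in p_values]
```

With six comparisons, p = 0.043 becomes 0.258 (not significant) while p = 0.003 becomes 0.018 (still significant), matching the pattern of significance reported above.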

Deep Learning Model for Automated Segmentation of Orbital Structures in MRI Images.

Bakhshaliyeva E, Reiner LN, Chelbi M, Nawabi J, Tietze A, Scheel M, Wattjes M, Dell'Orco A, Meddeb A

PubMed · Jun 26 2025
Magnetic resonance imaging (MRI) is a crucial tool for visualizing orbital structures and detecting eye pathologies. However, manual segmentation of orbital anatomy is challenging due to the complexity and variability of the structures. Recent advancements in deep learning (DL), particularly convolutional neural networks (CNNs), offer promising solutions for automated segmentation in medical imaging. This study aimed to train and evaluate a U-Net-based model for the automated segmentation of key orbital structures. This retrospective study included 117 patients with various orbital pathologies who underwent orbital MRI. Manual segmentation was performed on four anatomical structures: the ocular bulb, ocular tumors, retinal detachment, and the optic nerve. Following the UNet autoconfiguration by nnUNet, we conducted a five-fold cross-validation and evaluated the model's performances using Dice Similarity Coefficient (DSC) and Relative Absolute Volume Difference (RAVD) as metrics. nnU-Net achieved high segmentation performance for the ocular bulb (mean DSC: 0.931) and the optic nerve (mean DSC: 0.820). Segmentation of ocular tumors (mean DSC: 0.788) and retinal detachment (mean DSC: 0.550) showed greater variability, with performance declining in more challenging cases. Despite these challenges, the model achieved high detection rates, with ROC AUCs of 0.90 for ocular tumors and 0.78 for retinal detachment. This study demonstrates nnU-Net's capability for accurate segmentation of orbital structures, particularly the ocular bulb and optic nerve. However, challenges remain in the segmentation of tumors and retinal detachment due to variability and artifacts. Future improvements in deep learning models and broader, more diverse datasets may enhance segmentation performance, ultimately aiding in the diagnosis and treatment of orbital pathologies.
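The two segmentation metrics reported above, Dice Similarity Coefficient (DSC) and Relative Absolute Volume Difference (RAVD), are straightforward to compute on binary masks; a minimal sketch:

```python
import numpy as np

def dsc(pred, gt):
    """Dice similarity coefficient: 2|P ∩ G| / (|P| + |G|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * (pred & gt).sum() / denom if denom else 1.0

def ravd(pred, gt):
    """Relative absolute volume difference: |V_pred - V_gt| / V_gt."""
    vp, vg = float(pred.sum()), float(gt.sum())
    return abs(vp - vg) / vg
```

DSC rewards spatial overlap while RAVD only compares volumes, which is why the two are usually reported together.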

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

PubMed · Jun 26 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies like aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses calibration challenges and suggests leveraging interventional devices with radio-opaque markers to optimize C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices like guidewires and catheters (in the remainder of this paper, the term "catheter" will refer to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing to remove false positives, integrating vessel maps, manual correction, and identification markers. An interpolation step fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores using interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
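The candidate-detection step described above (thresholding a weighted sum of curvature, derivative, and high-frequency indicators) can be sketched as follows; the specific indicator implementations, weights, and threshold here are illustrative stand-ins for the paper's tuned values:

```python
import numpy as np

def catheter_candidates(img, w=(0.4, 0.3, 0.3), thresh=0.4):
    """Score pixels for elongated-device candidacy and threshold.

    Combines three normalized indicators: a curvature proxy (Laplacian
    magnitude), a derivative indicator (gradient magnitude), and a
    high-frequency residual (image minus a crude 3x3 box blur).
    """
    g = img.astype(float)
    gy, gx = np.gradient(g)
    grad = np.hypot(gx, gy)
    lap = np.abs(np.gradient(gy, axis=0) + np.gradient(gx, axis=1))
    pad = np.pad(g, 1, mode="edge")
    box = sum(pad[i:i + g.shape[0], j:j + g.shape[1]]
              for i in range(3) for j in range(3)) / 9.0
    hf = np.abs(g - box)
    norm = lambda a: a / (a.max() + 1e-12)
    score = w[0] * norm(lap) + w[1] * norm(grad) + w[2] * norm(hf)
    return score > thresh
```

On a synthetic image with a thin bright curve, line pixels score well above flat background, giving the candidate regions the ensemble classifier then refines.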

Improving Clinical Utility of Fetal Cine CMR Using Deep Learning Super-Resolution.

Vollbrecht TM, Hart C, Katemann C, Isaak A, Voigt MB, Pieper CC, Kuetting D, Geipel A, Strizek B, Luetkens JA

PubMed · Jun 26 2025
Fetal cardiovascular magnetic resonance is an emerging tool for prenatal congenital heart disease assessment, but long acquisition times and fetal movements limit its clinical use. This study evaluates the clinical utility of deep learning super-resolution reconstructions for rapidly acquired, low-resolution fetal cardiovascular magnetic resonance. This prospective study included participants with fetal congenital heart disease undergoing fetal cardiovascular magnetic resonance in the third trimester of pregnancy, with axial cine images acquired at normal resolution and low resolution. Low-resolution cine data were subsequently reconstructed using a deep learning super-resolution framework (cine<sub>DL</sub>). Acquisition times, apparent signal-to-noise ratio, contrast-to-noise ratio, and edge rise distance were assessed. Volumetry and functional analysis were performed. Qualitative image scores were rated on a 5-point Likert scale. Cardiovascular structures and pathological findings visible in cine<sub>DL</sub> images only were assessed. Statistical analysis included the Student paired <i>t</i> test and the Wilcoxon test. A total of 42 participants were included (median gestational age, 35.9 weeks [interquartile range (IQR), 35.1-36.4]). Cine<sub>DL</sub> acquisition was faster than normal-resolution cine acquisition (134±9.6 s versus 252±8.8 s; <i>P</i><0.001). Quantitative image quality metrics and image quality scores for cine<sub>DL</sub> were higher than or comparable with those of normal-resolution cine images (eg, fetal motion, 4.0 [IQR, 4.0-5.0] versus 4.0 [IQR, 3.0-4.0]; <i>P</i><0.001). Nonpatient-related artifacts (eg, backfolding) were more pronounced in cine<sub>DL</sub> than in normal-resolution cine images (4.0 [IQR, 4.0-5.0] versus 5.0 [IQR, 3.0-4.0]; <i>P</i><0.001). Volumetry and functional results were comparable.
Cine<sub>DL</sub> revealed additional structures in 10 of 42 fetuses (24%) and additional pathologies in 5 of 42 fetuses (12%), including partial anomalous pulmonary venous connection. Deep learning super-resolution reconstructions of low-resolution acquisitions shorten acquisition times and achieve diagnostic quality comparable with standard images, while being less sensitive to fetal bulk movements, leading to additional diagnostic findings. Therefore, deep learning super-resolution may improve the clinical utility of fetal cardiovascular magnetic resonance for accurate prenatal assessment of congenital heart disease.
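Edge rise distance, one of the sharpness metrics assessed above, is commonly measured as the 10%-90% rise of an intensity profile sampled across an edge; a shorter distance means a sharper edge. A hedged sketch assuming a monotonically rising profile:

```python
import numpy as np

def edge_rise_distance(profile, spacing=1.0, lo=0.1, hi=0.9):
    """10%-90% rise distance of a monotonically rising edge profile.

    Normalizes the profile to [0, 1] and linearly interpolates the
    positions where it crosses the lo and hi levels.
    """
    p = (profile - profile.min()) / (profile.max() - profile.min())
    x = np.arange(p.size) * spacing
    x_lo = np.interp(lo, p, x)  # requires p to be increasing
    x_hi = np.interp(hi, p, x)
    return x_hi - x_lo
```

A blurrier edge (e.g., a logistic profile with twice the transition width) yields roughly twice the rise distance, which is how the metric separates reconstruction settings.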

Harnessing Generative AI for Lung Nodule Spiculation Characterization.

Wang Y, Patel C, Tchoua R, Furst J, Raicu D

PubMed · Jun 26 2025
Spiculation, characterized by irregular, spike-like projections from nodule margins, serves as a crucial radiological biomarker for malignancy assessment and early cancer detection. These distinctive stellate patterns strongly correlate with tumor invasiveness and are vital for accurate diagnosis and treatment planning. Traditional computer-aided diagnosis (CAD) systems are limited in their capability to capture and use these patterns given their subtlety, difficulty in quantifying them, and small datasets available to learn these patterns. To address these challenges, we propose a novel framework leveraging variational autoencoders (VAE) to discover, extract, and vary disentangled latent representations of lung nodule images. By gradually varying the latent representations of non-spiculated nodule images, we generate augmented datasets containing spiculated nodule variations that, we hypothesize, can improve the diagnostic classification of lung nodules. Using the National Institutes of Health/National Cancer Institute Lung Image Database Consortium (LIDC) dataset, our results show that incorporating these spiculated image variations into the classification pipeline significantly improves spiculation detection performance up to 7.53%. Notably, this enhancement in spiculation detection is achieved while preserving the classification performance of non-spiculated cases. This approach effectively addresses class imbalance and enhances overall classification outcomes. The gradual attenuation of spiculation characteristics demonstrates our model's ability to both capture and generate clinically relevant semantic features in an algorithmic manner. These findings suggest that the integration of semantic-based latent representations into CAD models not only enhances diagnostic accuracy but also provides insights into the underlying morphological progression of spiculated nodules, enabling more informed and clinically meaningful AI-driven support systems.
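The augmentation idea above, gradually varying the latent representations of non-spiculated nodules, reduces to stepping along a latent direction and decoding each point. A minimal sketch with the decoder omitted; the "spiculation direction" is assumed to have been identified in the VAE's disentangled latent space, and the step sizes are illustrative:

```python
import numpy as np

def traverse_latent(z, direction, alphas=(0.0, 0.5, 1.0, 1.5)):
    """Step a latent code z along a unit direction to produce a sequence
    of codes; decoding each one yields progressively stronger variations
    of the targeted attribute (here, hypothetically, spiculation)."""
    d = direction / np.linalg.norm(direction)
    return [z + a * d for a in alphas]
```

Decoding the returned codes would give the augmented spiculated variants that are then added to the classification training set.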

Development, deployment, and feature interpretability of a three-class prediction model for pulmonary diseases.

Cao Z, Xu G, Gao Y, Xu J, Tian F, Shi H, Yang D, Xie Z, Wang J

PubMed · Jun 26 2025
To develop a high-performance machine learning model for predicting and interpreting features of pulmonary diseases. This retrospective study analyzed clinical and imaging data from patients with non-small cell lung cancer (NSCLC), granulomatous inflammation, and benign tumors, collected across multiple centers from January 2015 to October 2023. Data from two hospitals in Anhui Province were split into a development set (n = 1696) and a test set (n = 424) in an 8:2 ratio, with an external validation set (n = 909) from Zhejiang Province. Features with p < 0.05 from univariate analyses were selected using the Boruta algorithm for input into Random Forest (RF) and XGBoost models. Model efficacy was assessed using receiver operating characteristic (ROC) analysis. A total of 3030 patients were included: 2269 with NSCLC, 529 with granulomatous inflammation, and 232 with benign tumors. The Obuchowski indices for RF and XGBoost in the test set were 0.7193 (95% CI: 0.6567-0.7812) and 0.8282 (95% CI: 0.7883-0.8650), respectively. In the external validation set, the indices were 0.7932 (95% CI: 0.7572-0.8250) for RF and 0.8074 (95% CI: 0.7740-0.8387) for XGBoost. XGBoost achieved better accuracy in both the test (0.81) and external validation (0.79) sets. Calibration curves and decision curve analysis (DCA) showed that XGBoost offered a higher net clinical benefit. The XGBoost model outperforms RF in the three-class classification of lung diseases. XGBoost surpasses Random Forest in accurately classifying NSCLC, granulomatous inflammation, and benign tumors, offering superior clinical utility via multicenter data. The lung cancer classification model has broad clinical applicability. XGBoost outperforms random forests using CT imaging data. The XGBoost model can be deployed on a website for clinicians.
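Alongside the Obuchowski index reported above, three-class discrimination is often summarized with a one-vs-rest macro-averaged AUC over the models' class probabilities. A minimal numpy sketch (assumes no score ties between positive and negative samples within a class):

```python
import numpy as np

def auc_binary(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) statistic."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def macro_ovr_auc(prob, y):
    """One-vs-rest macro-averaged AUC for a multi-class probability
    matrix prob of shape (n, K) and integer labels y."""
    return float(np.mean([auc_binary(prob[:, k], (y == k).astype(int))
                          for k in range(prob.shape[1])]))
```

For a three-class classifier such as the RF/XGBoost models described, this treats each of NSCLC, granulomatous inflammation, and benign tumor in turn as the positive class and averages the resulting AUCs.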
