
D²-RD-UNet: A dual-stage dual-class framework with connectivity correction for hepatic vessels segmentation.

Cavicchioli M, Moglia A, Garret G, Puglia M, Vacavant A, Pugliese G, Cerveri P

PubMed · Jun 27 2025
Accurate segmentation of the hepatic and portal veins is critical for preoperative planning in liver surgery, especially for resection and transplantation procedures. Extensive anatomical variability, pathological alterations, and the inherent class imbalance between background and vascular structures make this task challenging. Current state-of-the-art deep learning approaches often fail to generalize across patient variability or to maintain vascular topology, limiting their clinical applicability. To overcome these limitations, we propose D²-RD-UNet, a dual-stage, dual-class segmentation framework for hepatic and portal vessels. The D²-RD-UNet architecture employs dense and residual connections to improve feature propagation and segmentation accuracy. It integrates data-driven preprocessing with a dual-path architecture for 3D and 4D data, the latter concatenating computed tomography (CT) scans with four vesselness filters (Sato, Frangi, OOF, and RORPO). The pipeline is completed by the first centerline-based multi-class vessel connectivity correction postprocessing algorithm. Additionally, we introduce the first radius-based branching algorithm to evaluate the model's predictions locally, providing detailed insight into the accuracy of vascular reconstructions at different scales. To address the scarcity of well-annotated open datasets for hepatic vessel segmentation, we curated AIMS-HPV-385, a large, pathological, multi-class, validated dataset of 385 CT scans. We trained different configurations of D²-RD-UNet and state-of-the-art models on 327 CTs from AIMS-HPV-385. Experimental results on the remaining 58 CTs of AIMS-HPV-385 and on the 20 CTs of 3D-IRCADb-01 demonstrate superior performance of the D²-RD-UNet variants over state-of-the-art methods, achieving robust generalization, preserving vascular continuity, and offering a reliable approach for liver vascular reconstruction.
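For readers who want a concrete handle on the vesselness-channel idea described above, a minimal sketch is shown below. It assumes a NumPy CT volume and uses only the Sato and Frangi filters shipped with scikit-image; OOF and RORPO require dedicated implementations and are omitted here, so this illustrates channel stacking rather than the authors' pipeline.

```python
# Illustrative only: stack a CT volume with vesselness responses along a
# channel axis to form a multi-channel ("4D") network input.
import numpy as np
from skimage.filters import frangi, sato

def build_vesselness_input(ct_volume: np.ndarray) -> np.ndarray:
    """Return a (C, Z, Y, X) array: CT plus vesselness maps as extra channels."""
    ct = ct_volume.astype(np.float32)
    channels = [
        ct,
        sato(ct, black_ridges=False),    # tubular-structure enhancement
        frangi(ct, black_ridges=False),  # Hessian-based vesselness
    ]
    return np.stack(channels, axis=0)

# Usage: multi_channel = build_vesselness_input(ct)   # ct has shape (Z, Y, X)
```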

HGTL: A hypergraph transfer learning framework for survival prediction of ccRCC.

Han X, Li W, Zhang Y, Li P, Zhu J, Zhang T, Wang R, Gao Y

PubMed · Jun 27 2025
The clinical diagnosis of clear cell renal cell carcinoma (ccRCC) primarily depends on histopathological analysis and computed tomography (CT). Although pathological diagnosis is regarded as the gold standard, invasive procedures such as biopsy carry the risk of tumor dissemination. Conversely, CT scanning offers a non-invasive alternative, but its resolution may be inadequate for detecting microscopic tumor features, which limits the performance of prognostic assessments. To address this issue, we propose a high-order correlation-driven method for predicting the survival of ccRCC using only CT images, achieving performance comparable to that of the pathological gold standard. The proposed method utilizes a cross-modal hypergraph neural network based on hypergraph transfer learning to perform high-order correlation modeling and semantic feature extraction from whole-slide pathological images and CT images. By employing multi-kernel maximum mean discrepancy, we transfer the high-order semantic features learned from pathological images to the CT-based hypergraph neural network channel. During the testing phase, high-precision survival predictions were achieved using only CT images, eliminating the need for pathological images. This approach not only reduces the risks associated with invasive examinations for patients but also significantly enhances clinical diagnostic efficiency. The proposed method was validated using four datasets: three collected from different hospitals and one from the public TCGA dataset. Experimental results indicate that the proposed method achieves higher concordance indices across all datasets compared to other methods.
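As an illustration of the transfer mechanism mentioned above, the sketch below shows a multi-kernel maximum mean discrepancy (MK-MMD) loss between two feature batches in PyTorch. The kernel bandwidths and variable names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative MK-MMD loss: squared MMD between source (pathology) and target
# (CT) feature batches using a sum of Gaussian kernels.
import torch

def mk_mmd(source: torch.Tensor, target: torch.Tensor,
           bandwidths=(1.0, 2.0, 4.0, 8.0)) -> torch.Tensor:
    x = torch.cat([source, target], dim=0)
    d2 = torch.cdist(x, x).pow(2)                  # pairwise squared distances
    k = sum(torch.exp(-d2 / (2.0 * b ** 2)) for b in bandwidths)
    n = source.size(0)
    return k[:n, :n].mean() + k[n:, n:].mean() - 2.0 * k[:n, n:].mean()

# total_loss = survival_loss + lambda_mmd * mk_mmd(pathology_feats, ct_feats)
```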

FSDA-DG: Improving cross-domain generalizability of medical image segmentation with few source domain annotations.

Ye Z, Wang K, Lv W, Feng Q, Lu L

PubMed · Jun 27 2025
Deep learning-based medical image segmentation faces significant challenges arising from limited labeled data and domain shifts. While prior approaches have primarily addressed these issues independently, their simultaneous occurrence is common in medical imaging. A method that generalizes to unseen domains using only minimal annotations offers significant practical value due to reduced data annotation and development costs. In pursuit of this goal, we propose FSDA-DG, a novel solution to improve cross-domain generalizability of medical image segmentation with few single-source domain annotations. Specifically, our approach introduces semantics-guided semi-supervised data augmentation. This method divides images into global broad regions and semantics-guided local regions, and applies distinct augmentation strategies to enrich the data distribution. Within this framework, both labeled and unlabeled data are transformed into extensive domain knowledge while preserving domain-invariant semantic information. Additionally, FSDA-DG employs a multi-decoder U-Net semi-supervised learning (SSL) pipeline to improve domain-invariant representation learning through a consistency assumption across multiple perturbations. By integrating data-level and model-level designs, FSDA-DG achieves superior performance compared to state-of-the-art methods in two challenging single domain generalization (SDG) tasks with limited annotations. The code is publicly available at https://github.com/yezanting/FSDA-DG.
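To make the consistency idea concrete, a hedged sketch of a perturbation-consistency loss for a multi-decoder SSL network is shown below; the consensus-target formulation and names are illustrative assumptions, not the released FSDA-DG code (see the repository linked above for the actual implementation).

```python
# Illustrative consistency term: each decoder's softmax output is pulled toward
# the (detached) mean prediction over all perturbed decoders on unlabeled data.
import torch
import torch.nn.functional as F

def consistency_loss(decoder_logits: list) -> torch.Tensor:
    probs = [F.softmax(logits, dim=1) for logits in decoder_logits]
    target = torch.stack(probs, dim=0).mean(dim=0).detach()   # consensus target
    return sum(F.mse_loss(p, target) for p in probs) / len(probs)
```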

Towards automated multi-regional lung parcellation for 0.55-3T 3D T2w fetal MRI

Uus, A., Avena Zampieri, C., Downes, F., Egloff Collado, A., Hall, M., Davidson, J., Payette, K., Aviles Verdera, J., Grigorescu, I., Hajnal, J. V., Deprez, M., Aertsen, M., Hutter, J., Rutherford, M., Deprest, J., Story, L.

medRxiv preprint · Jun 26 2025
Fetal MRI is increasingly being employed in the diagnosis of fetal lung anomalies, and segmentation-derived total fetal lung volumes are used as one of the parameters for prediction of neonatal outcomes. However, in clinical practice, segmentation is performed manually on motion-corrupted 2D stacks with thick slices, which is time-consuming and can lead to variation in the estimated volumes. Furthermore, there is a known lack of consensus regarding a universal lung parcellation protocol and expected normal total lung volume formulas. The lungs are also segmented as a single label, without parcellation into lobes. In terms of automation, to the best of our knowledge, there have been no reported works on multi-lobe segmentation for fetal lung MRI. This work introduces the first automated deep learning segmentation pipeline for multi-regional lung segmentation of 3D motion-corrected T2w fetal body images for normal anatomy and congenital diaphragmatic hernia cases. The protocol for parcellation into 5 standard lobes was defined in a population-averaged 3D atlas. It was then used to generate a multi-label training dataset including 104 normal anatomy controls and 45 congenital diaphragmatic hernia cases from 0.55T, 1.5T and 3T acquisition protocols. The performance of a 3D Attention UNet was evaluated on 18 cases and showed good results for normal lung anatomy, with expectedly lower Dice values for the ipsilateral lung. In addition, we produced normal lung volumetry growth charts from 290 controls acquired at 0.55T and 3T. This is the first step towards automated multi-regional fetal lung analysis for 3D fetal MRI.
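Since the growth charts above are built from segmentation-derived volumes, the following minimal sketch shows how per-lobe and total volumes can be computed from a multi-label mask and the voxel spacing; the label convention is assumed for illustration and is not taken from the paper.

```python
# Illustrative volumetry: convert voxel counts of a labelled 3D mask into mL.
import numpy as np

def lobe_volumes_ml(label_map: np.ndarray,
                    voxel_spacing_mm=(1.0, 1.0, 1.0)) -> dict:
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0   # mm^3 -> mL
    labels, counts = np.unique(label_map[label_map > 0], return_counts=True)
    volumes = {int(l): float(c) * voxel_ml for l, c in zip(labels, counts)}
    volumes["total"] = sum(volumes.values())
    return volumes
```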

Clinician-Led Code-Free Deep Learning for Detecting Papilloedema and Pseudopapilloedema Using Optic Disc Imaging

Shenoy, R., Samra, G. S., Sekhri, R., Yoon, H.-J., Teli, S., DeSilva, I., Tu, Z., Maconachie, G. D., Thomas, M. G.

medRxiv preprint · Jun 26 2025
Importance: Differentiating pseudopapilloedema from papilloedema is challenging, but critical for prompt diagnosis and to avoid unnecessary invasive procedures. Following a diagnosis of papilloedema, objectively grading severity is important for determining the urgency of management and the therapeutic response. Automated machine learning (AutoML) has emerged as a promising tool for diagnosis in medical imaging and may provide accessible opportunities for consistent and accurate diagnosis and severity grading of papilloedema. Objective: This study evaluates the feasibility of AutoML models for distinguishing the presence and severity of papilloedema using near-infrared reflectance (NIR) images obtained from standard optical coherence tomography (OCT), comparing the performance of different AutoML platforms. Design, setting and participants: A retrospective cohort study was conducted using data from University Hospitals of Leicester NHS Trust. The study involved 289 adult and paediatric patients (813 images) who underwent optic nerve head-centred OCT imaging between 2021 and 2024. The dataset included patients with normal optic discs (69 patients, 185 images), papilloedema (135 patients, 372 images), and optic disc drusen (ODD) (85 patients, 256 images). Three AutoML platforms (Amazon Rekognition, Medic Mind (MM) and Google Vertex) were evaluated for their ability to classify and grade papilloedema severity. Main outcomes and measures: Two classification tasks were performed: (1) distinguishing papilloedema from normal discs and ODD; (2) grading papilloedema severity (mild/moderate vs. severe). Model performance was evaluated using area under the curve (AUC), precision, recall, F1 score, and confusion matrices for all six models. Results: Amazon Rekognition outperformed the other platforms, achieving the highest AUC (0.90) and F1 score (0.81) in distinguishing papilloedema from normal/ODD. For papilloedema severity grading, Amazon Rekognition also performed best, with an AUC of 0.90 and F1 score of 0.79. Google Vertex and Medic Mind demonstrated good performance but had slightly lower accuracy and higher misclassification rates. Conclusions and relevance: This evaluation of three widely available AutoML platforms using NIR images obtained from standard OCT shows promise in distinguishing and grading papilloedema. These models provide an accessible, scalable solution for clinical teams without coding expertise to develop intelligent diagnostic systems that recognise and characterise papilloedema. Further external validation and prospective testing are needed to confirm their clinical utility and applicability in diverse settings. Key points: Question: Can clinician-led, code-free deep learning models using automated machine learning (AutoML) accurately differentiate papilloedema from pseudopapilloedema using optic disc imaging? Findings: Three widely available AutoML platforms were used to develop models that successfully distinguish the presence and severity of papilloedema on optic disc imaging, with Amazon Rekognition demonstrating the highest performance. Meaning: AutoML may assist clinical teams, even those with limited coding expertise, in diagnosing papilloedema, potentially reducing the need for invasive investigations.
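Although the study itself is code-free, the reported metrics (AUC, precision, recall, F1 score, confusion matrix) can be recomputed from exported per-image probabilities; the sketch below is an assumed, scikit-learn-based illustration and not part of the AutoML platforms' output.

```python
# Illustrative binary-classification evaluation from per-image probabilities.
from sklearn.metrics import (confusion_matrix, f1_score, precision_score,
                             recall_score, roc_auc_score)

def evaluate_binary(y_true, y_prob, threshold=0.5):
    y_pred = [int(p >= threshold) for p in y_prob]
    return {
        "auc": roc_auc_score(y_true, y_prob),
        "precision": precision_score(y_true, y_pred),
        "recall": recall_score(y_true, y_pred),
        "f1": f1_score(y_true, y_pred),
        "confusion_matrix": confusion_matrix(y_true, y_pred),
    }
```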

Morphology-based radiological-histological correlation on ultra-high-resolution energy-integrating detector CT using cadaveric human lungs: nodule and airway analysis.

Hata A, Yanagawa M, Ninomiya K, Kikuchi N, Kurashige M, Nishigaki D, Doi S, Yamagata K, Yoshida Y, Ogawa R, Tokuda Y, Morii E, Tomiyama N

PubMed · Jun 26 2025
To evaluate the depiction capability for fine lung nodules and airways of high-resolution settings on ultra-high-resolution energy-integrating detector CT (UHR-CT), incorporating large matrix sizes, thin slice thickness, and iterative reconstruction (IR)/deep-learning reconstruction (DLR), using cadaveric human lungs and corresponding histological images. Images of 20 lungs were acquired using conventional CT (CCT), UHR-CT, and photon-counting detector CT (PCD-CT). CCT images were reconstructed with a 512 matrix and IR (CCT-512-IR). UHR-CT images were reconstructed with four settings by varying the matrix size and the reconstruction method: UHR-512-IR, UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. Two imaging settings of PCD-CT were used: PCD-512-IR and PCD-1024-IR. CT images were visually evaluated and compared with histology. Overall, 6769 nodules (median: 1321 µm) and 92 airways (median: 851 µm) were evaluated. For nodules, UHR-2048-IR outperformed CCT-512-IR, UHR-512-IR, and UHR-1024-IR (p < 0.001). UHR-1024-DLR showed no significant difference from UHR-2048-IR in the overall nodule score after Bonferroni correction (uncorrected p = 0.043); however, for nodules > 1000 μm, UHR-2048-IR demonstrated significantly better scores than UHR-1024-DLR (p = 0.003). For airways, UHR-1024-IR and UHR-512-IR showed significant differences (p < 0.001), with no notable differences among UHR-1024-IR, UHR-2048-IR, and UHR-1024-DLR. UHR-2048-IR detected nodules and airways with median diameters of 604 µm and 699 µm, respectively. No significant difference was observed between UHR-512-IR and PCD-512-IR (p > 0.1). PCD-1024-IR outperformed UHR-CT settings for nodules > 1000 μm (p ≤ 0.001), while UHR-1024-DLR outperformed PCD-1024-IR for airways > 1000 μm (p = 0.005). UHR-2048-IR demonstrated the highest scores among the evaluated energy-integrating detector CT images. UHR-CT showed potential for detecting submillimeter nodules and airways. With the 512 matrix, UHR-CT demonstrated performance comparable to PCD-CT. Question: Data evaluating the depiction capability of ultra-high-resolution energy-integrating detector CT (UHR-CT) for fine structures are scarce, as are comparisons with photon-counting detector CT (PCD-CT). Findings: UHR-CT depicted nodules and airways with median diameters of 604 µm and 699 µm, showing no significant difference from PCD-CT with the 512 matrix. Clinical relevance: High-resolution imaging is crucial for lung diagnosis. UHR-CT has the potential to contribute to pulmonary nodule diagnosis and airway disease evaluation by detecting fine opacities and airways.
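The pairwise comparisons with Bonferroni correction reported above can be illustrated as follows; the test choice (Wilcoxon signed-rank on paired per-case scores) and setting names are assumptions for illustration, not the study's statistical code.

```python
# Illustrative pairwise comparison of reconstruction settings with Bonferroni
# correction; scores[name] holds paired per-case visual scores for one setting.
from itertools import combinations
from scipy.stats import wilcoxon

def compare_settings(scores: dict, alpha: float = 0.05) -> dict:
    pairs = list(combinations(scores.keys(), 2))
    corrected_alpha = alpha / len(pairs)           # Bonferroni-adjusted threshold
    results = {}
    for a, b in pairs:
        _, p = wilcoxon(scores[a], scores[b])
        results[(a, b)] = {"p": p, "significant": p < corrected_alpha}
    return results

# compare_settings({"UHR-2048-IR": [...], "UHR-1024-DLR": [...], "CCT-512-IR": [...]})
```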

Deep Learning Model for Automated Segmentation of Orbital Structures in MRI Images.

Bakhshaliyeva E, Reiner LN, Chelbi M, Nawabi J, Tietze A, Scheel M, Wattjes M, Dell'Orco A, Meddeb A

PubMed · Jun 26 2025
Magnetic resonance imaging (MRI) is a crucial tool for visualizing orbital structures and detecting eye pathologies. However, manual segmentation of orbital anatomy is challenging due to the complexity and variability of the structures. Recent advancements in deep learning (DL), particularly convolutional neural networks (CNNs), offer promising solutions for automated segmentation in medical imaging. This study aimed to train and evaluate a U-Net-based model for the automated segmentation of key orbital structures. This retrospective study included 117 patients with various orbital pathologies who underwent orbital MRI. Manual segmentation was performed on four anatomical structures: the ocular bulb, ocular tumors, retinal detachment, and the optic nerve. Following the U-Net auto-configuration performed by nnU-Net, we conducted a five-fold cross-validation and evaluated the model's performance using the Dice Similarity Coefficient (DSC) and Relative Absolute Volume Difference (RAVD) as metrics. nnU-Net achieved high segmentation performance for the ocular bulb (mean DSC: 0.931) and the optic nerve (mean DSC: 0.820). Segmentation of ocular tumors (mean DSC: 0.788) and retinal detachment (mean DSC: 0.550) showed greater variability, with performance declining in more challenging cases. Despite these challenges, the model achieved high detection rates, with ROC AUCs of 0.90 for ocular tumors and 0.78 for retinal detachment. This study demonstrates nnU-Net's capability for accurate segmentation of orbital structures, particularly the ocular bulb and optic nerve. However, challenges remain in the segmentation of tumors and retinal detachment due to variability and artifacts. Future improvements in deep learning models and broader, more diverse datasets may enhance segmentation performance, ultimately aiding in the diagnosis and treatment of orbital pathologies.
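For reference, the two reported metrics can be written out as below; this is a generic sketch over binary masks, not the study's evaluation code.

```python
# Illustrative DSC and RAVD between a predicted and a reference binary mask.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

def ravd(pred: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    vp, vr = float(pred.astype(bool).sum()), float(ref.astype(bool).sum())
    return abs(vp - vr) / (vr + eps)
```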

Semi-automatic segmentation of elongated interventional instruments for online calibration of C-arm imaging system.

Chabi N, Illanes A, Beuing O, Behme D, Preim B, Saalfeld S

PubMed · Jun 26 2025
The C-arm biplane imaging system, designed for cerebral angiography, detects pathologies like aneurysms using dual rotating detectors for high-precision, real-time vascular imaging. However, accuracy can be affected by source-detector trajectory deviations caused by gravitational artifacts and mechanical instabilities. This study addresses calibration challenges and suggests leveraging interventional devices with radio-opaque markers to optimize the C-arm geometry. We propose an online calibration method using image-specific features derived from interventional devices like guidewires and catheters (in the remainder of this paper, the term "catheter" refers to both catheters and guidewires). The process begins with gantry-recorded data, refined through iterative nonlinear optimization. A machine learning approach detects and segments elongated devices by identifying candidates via thresholding on a weighted sum of curvature, derivative, and high-frequency indicators. An ensemble classifier segments these regions, followed by post-processing to remove false positives, integrating vessel maps, manual correction, and identification markers. An interpolation step then fills gaps along the catheter. Among the optimized ensemble classifiers, the one trained on the first frames achieved the best performance, with a specificity of 99.43% and precision of 86.41%. The calibration method was evaluated on three clinical datasets and four phantom angiogram pairs, reducing the mean backprojection error from 4.11 ± 2.61 to 0.15 ± 0.01 mm. Additionally, 3D accuracy analysis showed an average root mean square error of 3.47% relative to the true marker distance. This study explores using interventional tools with radio-opaque markers for C-arm self-calibration. The proposed method significantly reduces the 2D backprojection error and 3D RMSE, enabling accurate 3D vascular reconstruction.
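The iterative nonlinear refinement step can be pictured as a reprojection-error minimisation over the gantry-recorded geometry; the parameterisation, projection model, and names below are simplified assumptions, not the authors' implementation.

```python
# Illustrative geometry refinement: adjust per-view geometry offsets so that
# 3D catheter markers reproject onto their detected 2D positions in both views.
import numpy as np
from scipy.optimize import least_squares

def residuals(params, markers_3d, detected_a, detected_b, project_fn):
    geom_a, geom_b = params[:6], params[6:]        # e.g. rotation + translation offsets
    res_a = project_fn(markers_3d, geom_a) - detected_a
    res_b = project_fn(markers_3d, geom_b) - detected_b
    return np.concatenate([res_a.ravel(), res_b.ravel()])

# refined = least_squares(residuals, x0=np.zeros(12),
#                         args=(markers_3d, det_a, det_b, project))
```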

Exploring the Design Space of 3D MLLMs for CT Report Generation

Mohammed Baharoon, Jun Ma, Congyu Fang, Augustin Toma, Bo Wang

arXiv preprint · Jun 26 2025
Multimodal Large Language Models (MLLMs) have emerged as a promising way to automate Radiology Report Generation (RRG). In this work, we systematically investigate the design space of 3D MLLMs, including visual input representation, projectors, Large Language Models (LLMs), and fine-tuning techniques for 3D CT report generation. We also introduce two knowledge-based report augmentation methods that improve performance on the GREEN score by up to 10%, achieving 2nd place in the MICCAI 2024 AMOS-MM challenge. Our results on the 1,687 cases from the AMOS-MM dataset show that RRG is largely independent of the LLM size under the same training protocol. We also show that a larger volume size does not always improve performance if the original ViT was pre-trained on a smaller volume size. Lastly, we show that using a segmentation mask along with the CT volume improves performance. The code is publicly available at https://github.com/bowang-lab/AMOS-MM-Solution

HyperSORT: Self-Organising Robust Training with hyper-networks

Samuel Joutard, Marijn Stollenga, Marc Balle Sanchez, Mohammad Farid Azampour, Raphael Prevost

arXiv preprint · Jun 26 2025
Medical imaging datasets often contain heterogeneous biases ranging from erroneous labels to inconsistent labeling styles. Such biases can negatively impact the performance of deep segmentation networks. Yet, identifying and characterizing such biases remains a particularly tedious and challenging task. In this paper, we introduce HyperSORT, a framework in which a hyper-network predicts UNet parameters from latent vectors representing both image and annotation variability. The hyper-network parameters and the latent vector collection corresponding to each data sample from the training set are jointly learned. Hence, instead of optimizing a single neural network to fit a dataset, HyperSORT learns a complex distribution of UNet parameters where low-density areas can capture noise-specific patterns while larger modes robustly segment organs in differentiated but meaningful manners. We validate our method on two 3D abdominal CT public datasets: first, a synthetically perturbed version of the AMOS dataset, and then TotalSegmentator, a large-scale dataset containing real, unknown biases and errors. Our experiments show that HyperSORT creates a structured mapping of the dataset, allowing the identification of relevant systematic biases and erroneous samples. Latent space clusters yield UNet parameters that perform the segmentation task in accordance with the underlying learned systematic bias. The code and our analysis of the TotalSegmentator dataset are made available: https://github.com/ImFusionGmbH/HyperSORT
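To make the hyper-network idea tangible, a toy sketch is given below: a small MLP maps a per-sample latent vector to the weights of a single 3D convolution. HyperSORT applies this principle to full UNet parameter sets with jointly learned latents; the code here is an illustrative assumption, not the released implementation.

```python
# Toy hyper-network: predict the weights and bias of one 3D conv layer from a
# latent vector z, then apply the predicted layer to the input volume x.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HyperConv3d(nn.Module):
    def __init__(self, latent_dim=16, in_ch=1, out_ch=8, k=3):
        super().__init__()
        self.w_shape = (out_ch, in_ch, k, k, k)
        n_params = out_ch * in_ch * k ** 3 + out_ch        # weights + bias
        self.hyper = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                   nn.Linear(64, n_params))

    def forward(self, x, z):                               # z: (latent_dim,) per sample
        params = self.hyper(z)
        w = params[:-self.w_shape[0]].view(self.w_shape)
        b = params[-self.w_shape[0]:]
        return F.conv3d(x, w, b, padding=1)

# Each training sample keeps its own learnable z, optimised jointly with the hyper-network.
```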