Page 35 of 133 · 1328 results

Restorative artificial intelligence-driven implant dentistry for immediate implant placement with an interim crown: A clinical report.

Marques VR, Soh D, Cerqueira G, Orgev A

PubMed · Aug 14, 2025
Immediate implant placement into the extraction socket based on a restoratively driven approach poses challenges that might compromise the delivery of an immediate interim restoration on the day of surgery. The digital design of the interim restoration may require modification before delivery and may not maintain the planned form to support the gingival architecture and future prosthetic volume of the emergence profile. This report demonstrates how to use artificial intelligence (AI)-assisted segmentation of bone and tooth to enhance restoratively driven planning for immediate implant placement with an immediate interim restoration. A fractured maxillary central incisor was extracted after cone beam computed tomography (CBCT) analysis. AI-assisted segmentation of the digital imaging and communications in medicine (DICOM) file was used to separate the tooth and alveolar bone for digital implant planning, and the AI-assisted design of the interim restoration was copied from the natural tooth contour, optimizing the emergence profile. Immediate implant placement was completed after minimally traumatic extraction, and the AI-assisted interim restoration was delivered immediately. The AI-assisted workflow enabled predictable implant positioning based on restorative needs, reducing surgical time and optimizing delivery of the interim restoration for improved clinical outcomes. The emergence profile of the anatomic crown copied from the AI workflow for the interim restoration effectively guided soft tissue healing.

Beam Hardening Correction in Clinical X-ray Dark-Field Chest Radiography using Deep Learning-Based Bone Segmentation

Lennard Kaster, Maximilian E. Lochschmidt, Anne M. Bauer, Tina Dorosti, Sofia Demianova, Thomas Koehler, Daniela Pfeiffer, Franz Pfeiffer

arXiv preprint · Aug 14, 2025
Dark-field radiography is a novel X-ray imaging modality that provides complementary diagnostic information by visualizing the microstructural properties of lung tissue. Implemented via a Talbot-Lau interferometer integrated into a conventional X-ray system, it allows simultaneous acquisition of perfectly temporally and spatially registered attenuation-based conventional and dark-field radiographs. Recent clinical studies have demonstrated that dark-field radiography outperforms conventional radiography in diagnosing and staging pulmonary diseases. However, the polychromatic nature of medical X-ray sources leads to beam hardening, which introduces structured artifacts in the dark-field radiographs, particularly from osseous structures. This so-called beam-hardening-induced dark-field signal is an artificial dark-field signal and causes undesired cross-talk between the attenuation and dark-field channels. This work presents a segmentation-based beam-hardening correction method using deep learning to segment ribs and clavicles. Attenuation contribution masks derived from dual-layer detector computed tomography data, decomposed into aluminum and water, were used to refine the material distribution estimation. The method was evaluated both qualitatively and quantitatively on clinical data from healthy subjects and patients with chronic obstructive pulmonary disease and COVID-19. The proposed approach reduces bone-induced artifacts and improves the homogeneity of the lung dark-field signal, supporting more reliable visual and quantitative assessment in clinical dark-field chest radiography.

SingleStrip: learning skull-stripping from a single labeled example

Bella Specktor-Fadida, Malte Hoffmann

arXiv preprint · Aug 14, 2025
Deep learning segmentation relies heavily on labeled data, but manual labeling is laborious and time-consuming, especially for volumetric images such as brain magnetic resonance imaging (MRI). While recent domain-randomization techniques alleviate the dependency on labeled data by synthesizing diverse training images from label maps, they offer limited anatomical variability when very few label maps are available. Semi-supervised self-training addresses label scarcity by iteratively incorporating model predictions into the training set, enabling networks to learn from unlabeled data. In this work, we combine domain randomization with self-training to train three-dimensional skull-stripping networks using as little as a single labeled example. First, we automatically bin voxel intensities, yielding labels we use to synthesize images for training an initial skull-stripping model. Second, we train a convolutional autoencoder (AE) on the labeled example and use its reconstruction error to assess the quality of brain masks predicted for unlabeled data. Third, we select the top-ranking pseudo-labels to fine-tune the network, achieving skull-stripping performance on out-of-distribution data that approaches models trained with more labeled images. We compare AE-based ranking to consistency-based ranking under test-time augmentation, finding that the AE approach yields a stronger correlation with segmentation accuracy. Our results highlight the potential of combining domain randomization and AE-based quality control to enable effective semi-supervised segmentation from extremely limited labeled data. This strategy may ease the labeling burden that slows progress in studies involving new anatomical structures or emerging imaging techniques.
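The AE-based quality-control step described above can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: `autoencoder` is assumed to be any callable that returns a reconstruction of its input, and the selection simply keeps the lowest-error candidates as pseudo-labels.

```python
import numpy as np

def reconstruction_error(autoencoder, masked_image):
    """Mean squared error between the input and its AE reconstruction."""
    recon = autoencoder(masked_image)
    return float(np.mean((masked_image - recon) ** 2))

def select_top_pseudo_labels(autoencoder, images, masks, k):
    """Rank unlabeled cases by AE reconstruction error of the brain-masked
    image and keep the k lowest-error (presumed highest-quality) pairs."""
    errors = [reconstruction_error(autoencoder, img * m)
              for img, m in zip(images, masks)]
    order = np.argsort(errors)   # ascending: low error ranks first
    return [(images[i], masks[i]) for i in order[:k]]
```

The selected pairs would then be appended to the training set for fine-tuning, closing the self-training loop.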

SimAQ: Mitigating Experimental Artifacts in Soft X-Ray Tomography using Simulated Acquisitions

Jacob Egebjerg, Daniel Wüstner

arXiv preprint · Aug 14, 2025
Soft X-ray tomography (SXT) provides detailed structural insight into whole cells but is hindered by experimental artifacts such as the missing wedge and by limited availability of annotated datasets. We present SimAQ, a simulation pipeline that generates realistic cellular phantoms and applies synthetic artifacts to produce paired noisy volumes, sinograms, and reconstructions. We validate our approach by training a neural network primarily on synthetic data and demonstrate effective few-shot and zero-shot transfer learning on real SXT tomograms. Our model delivers accurate segmentations, enabling quantitative analysis of noisy tomograms without relying on large labeled datasets or complex reconstruction methods.
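As a rough illustration of one synthetic artifact mentioned above, the missing wedge can be simulated by discarding projections outside the achievable tilt range of the microscope. This is a hedged sketch, not the SimAQ pipeline; the tilt limit and the sinogram layout are assumptions.

```python
import numpy as np

def apply_missing_wedge(sinogram, angles_deg, tilt_limit_deg=60.0):
    """Zero out projections whose tilt angle exceeds the acquisition range,
    mimicking the missing-wedge artifact of soft X-ray tomography.

    sinogram: (n_angles, n_detectors) array of line projections.
    angles_deg: projection angle per sinogram row, centered on 0 degrees.
    """
    sino = sinogram.copy()
    blocked = np.abs(angles_deg) > tilt_limit_deg
    sino[blocked] = 0.0   # these projections were never acquired
    return sino
```

Reconstructing from such a truncated sinogram reproduces the characteristic elongation artifacts that a network can then learn to compensate for.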

An Adaptive Multi-Stage and Adjacent-Level Feature Integration Network for Brain Tumor Image Segmentation.

Zhou J, Wu Y, Xu Y, Liu W

PubMed · Aug 14, 2025
The segmentation of brain tumor magnetic resonance imaging (MRI) plays a crucial role in assisting diagnosis, treatment planning, and disease progression evaluation. Convolutional neural networks (CNNs) and transformer-based methods have achieved significant progress due to their local and global feature extraction capabilities. However, similar to other medical image segmentation tasks, challenges remain in addressing issues such as blurred boundaries, small lesion volumes, and interwoven regions. General CNN and transformer approaches struggle to effectively resolve these issues. Therefore, a new multi-stage and adjacent-level feature integration network (MAI-Net) is introduced to overcome these challenges, thereby improving the overall segmentation accuracy. MAI-Net consists of dual-branch, multi-level structures and three innovative modules. The stage-level multi-scale feature extraction (SMFE) module focuses on capturing feature details from fine to coarse scales, improving detection of blurred edges and small lesions. The adjacent-level feature fusion (AFF) module facilitates information exchange across different levels, enhancing segmentation accuracy in complex regions as well as small volume lesions. Finally, the multi-stage feature fusion (MFF) module further integrates features from various levels to improve segmentation performance in complex regions. Extensive experiments on BraTS2020 and BraTS2021 datasets demonstrate that MAI-Net significantly outperforms existing methods in Dice and HD95 metrics. Furthermore, generalization experiments on a public ischemic stroke dataset confirm its robustness across different segmentation tasks. These results highlight the significant advantages of MAI-Net in addressing domain-specific challenges while maintaining strong generalization capabilities.

A novel unified Inception-U-Net hybrid gravitational optimization model (UIGO) incorporating automated medical image segmentation and feature selection for liver tumor detection.

Banerjee T, Singh DP, Kour P, Swain D, Mahajan S, Kadry S, Kim J

PubMed · Aug 14, 2025
Segmenting liver tumors in medical imaging is pivotal for precise diagnosis, treatment, and evaluation of therapy outcomes. Even with modern imaging technologies, fully automated segmentation systems have not overcome the challenge posed by the diversity in the shape, size, and texture of liver tumors, and the resulting errors often hinder clinicians from making timely and accurate decisions. This study addresses these issues through the development of UIGO, a new deep learning model that merges U-Net and Inception networks and incorporates advanced feature selection and optimization strategies. The goals of UIGO include achieving highly precise segmentation results while maintaining optimal computational requirements for efficiency in real-world clinical use. Publicly available liver tumor segmentation datasets were used for testing the model: LiTS (Liver Tumor Segmentation Challenge), CHAOS (Combined Healthy Abdominal Organ Segmentation), and 3D-IRCADb1 (3D-IRCAD liver dataset). With various tumor shapes and sizes across different imaging modalities such as CT and MRI, these datasets ensured comprehensive testing of UIGO's performance in diverse clinical scenarios. The experimental outcomes show the effectiveness of UIGO, with a segmentation accuracy of 99.93%, an AUC score of 99.89%, a Dice coefficient of 0.997, and an IoU of 0.998. UIGO outperformed other contemporary liver tumor segmentation techniques, indicating its potential to help clinicians deliver precise and prompt evaluations at lower computational expense. This study underscores the effort toward streamlined, dependable, and clinically useful tools for liver tumor segmentation in medical imaging.
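For reference, the two overlap metrics reported above, the Dice coefficient and IoU, can be computed for binary masks as follows. This is a generic sketch of the standard definitions, not UIGO's evaluation code.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-7):
    """IoU (Jaccard index) = |A ∩ B| / |A ∪ B| for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

Note that Dice is always at least as large as IoU on the same pair of masks, which is why papers typically report both.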

Automatic segmentation of cone beam CT images using treatment planning CT images in patients with prostate cancer.

Takayama Y, Kadoya N, Yamamoto T, Miyasaka Y, Kusano Y, Kajikawa T, Tomori S, Katsuta Y, Tanaka S, Arai K, Takeda K, Jingu K

PubMed · Aug 14, 2025
Cone-beam computed tomography-based online adaptive radiotherapy (CBCT-based online ART) is currently used in clinical practice; however, deep learning-based segmentation of CBCT images remains challenging. Previous studies generated CBCT datasets for segmentation by adding contours outside clinical practice or by synthesizing tissue contrast-enhanced diagnostic images paired with CBCT images. This study aimed to improve CBCT segmentation by matching the treatment planning CT (tpCT) image quality to CBCT images without altering the tpCT image or its contours. A deep-learning-based CBCT segmentation model was trained for the male pelvis using only the tpCT dataset. To bridge the quality gap between tpCT and routine CBCT images, an artificial pseudo-CBCT dataset was generated using Gaussian noise and Fourier domain adaptation (FDA) for 80 tpCT datasets (the hybrid FDA method). A five-fold cross-validation approach was used for model training. For comparison, atlas-based segmentation was performed with a registered tpCT dataset. The Dice similarity coefficient (DSC) assessed contour quality between the model-predicted and reference manual contours. The average DSC values for the clinical target volume, bladder, and rectum using the hybrid FDA method were 0.71 ± 0.08, 0.84 ± 0.08, and 0.78 ± 0.06, respectively. In contrast, the values for the model using plain tpCT were 0.40 ± 0.12, 0.17 ± 0.21, and 0.18 ± 0.14, and for the atlas-based model were 0.66 ± 0.13, 0.59 ± 0.16, and 0.66 ± 0.11, respectively. The segmentation model using the hybrid FDA method demonstrated significantly higher accuracy than models trained on plain tpCT datasets and those using atlas-based segmentation.
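The FDA component of the hybrid method, which transplants low-frequency amplitude content from a target-style image onto a source image while keeping the source phase, can be sketched as below. This follows the standard Fourier domain adaptation recipe and is not the authors' exact implementation; the `beta` band-size parameter and the 2D slice-wise formulation are assumptions.

```python
import numpy as np

def fourier_domain_adaptation(src, tgt, beta=0.05):
    """Replace the low-frequency amplitude spectrum of `src` with that of
    `tgt` (Fourier domain adaptation), keeping the source phase.

    src, tgt: 2D float arrays of equal shape.
    beta: half-width of the swapped low-frequency block, as a fraction
          of the smaller image dimension.
    """
    fft_src = np.fft.fftshift(np.fft.fft2(src))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = src.shape
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Swap the centered low-frequency block of the amplitude spectrum.
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_tgt[ch - b:ch + b + 1, cw - b:cw + b + 1]

    fft_new = np.fft.ifftshift(amp_src * np.exp(1j * pha_src))
    return np.real(np.fft.ifft2(fft_new))
```

Adding Gaussian noise on top of the FDA-adapted tpCT slices would then yield the pseudo-CBCT appearance used for training.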

BSA-Net: Boundary-prioritized spatial adaptive network for efficient left atrial segmentation.

Xu F, Tu W, Feng F, Yang J, Gunawardhana M, Gu Y, Huang J, Zhao J

PubMed · Aug 13, 2025
Atrial fibrillation, a common cardiac arrhythmia with rapid and irregular atrial electrical activity, requires accurate left atrial segmentation for effective treatment planning. Recently, deep learning methods have achieved encouraging success in left atrial segmentation. However, current methodologies critically depend on the assumption of a consistently complete, centered left atrium as input, which neglects the structural incompleteness and boundary discontinuities arising from random-crop operations during inference. In this paper, we propose BSA-Net, which exploits an adaptive adjustment strategy in both feature position and loss optimization to establish long-range feature relationships and strengthen robust intermediate feature representations in boundary regions. Specifically, we propose a Spatial-adaptive Convolution (SConv) that employs a shuffle operation combined with lightweight convolution to directly establish cross-positional relationships within regions of potential relevance. Moreover, we develop a dual Boundary Prioritized loss, which enhances boundary precision by differentially weighting foreground and background boundaries, thus optimizing complex boundary regions. With these techniques, the proposed method enjoys a better speed-accuracy trade-off than current methods. BSA-Net attains Dice scores of 92.55%, 91.42%, and 84.67% on the LA, Utah, and Waikato datasets, respectively, with a mere 2.16 M parameters, approximately 80% fewer than other contemporary state-of-the-art models. Extensive experimental results on three benchmark datasets demonstrate that BSA-Net consistently and significantly outperforms existing state-of-the-art methods.

In vivo variability of MRI radiomics features in prostate lesions assessed by a test-retest study with repositioning.

Zhang KS, Neelsen CJO, Wennmann M, Hielscher T, Kovacs B, Glemser PA, Görtz M, Stenzinger A, Maier-Hein KH, Huber J, Schlemmer HP, Bonekamp D

PubMed · Aug 13, 2025
Despite academic success, radiomics-based machine learning algorithms have not reached clinical practice, partially due to limited repeatability/reproducibility. To address this issue, this work aims to identify a stable subset of radiomics features in prostate MRI for radiomics modelling. A prospective study was conducted in 43 patients who received a clinical MRI examination and a research exam with repetition of T2-weighted and two different diffusion-weighted imaging (DWI) sequences, with repositioning in between. Radiomics feature (RF) extraction was performed from MRI segmentations accounting for intra-rater and inter-rater effects, and three different image normalization methods were compared. Stability of RFs was assessed using the concordance correlation coefficient (CCC) for different comparisons: rater effects, inter-scan (before and after repositioning), and inter-sequence (between the two diffusion-weighted sequences) variability. In total, only 64 of 321 (~ 20%) extracted features demonstrated stability, defined as CCC ≥ 0.75 in all settings (5 high-b-value-, 7 ADC-, and 52 T2-derived features). For DWI, primarily intensity-based features proved stable, with no shape feature passing the CCC threshold. T2-weighted images possessed the largest number of stable features, with multiple shape (7), intensity-based (7), and texture (28) features. Z-score normalization for high-b-value images and muscle normalization for T2-weighted images were identified as the most suitable normalization methods.
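The stability criterion above relies on Lin's concordance correlation coefficient, which penalizes both poor correlation and systematic shifts between repeated measurements. A minimal sketch of the standard definition:

```python
import numpy as np

def concordance_correlation(x, y):
    """Lin's concordance correlation coefficient:
    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2).
    Equals 1 only for perfect agreement (identity line), unlike Pearson's r,
    which ignores scale and location shifts.
    """
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()               # population (1/n) variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)
```

A feature whose test and retest values satisfy CCC ≥ 0.75 across all comparisons would count as stable under the criterion used here.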

Exploring the robustness of TractOracle methods in RL-based tractography.

Levesque J, Théberge A, Descoteaux M, Jodoin PM

PubMed · Aug 13, 2025
Tractography algorithms leverage diffusion MRI to reconstruct the fibrous architecture of the brain's white matter. Among machine learning approaches, reinforcement learning (RL) has emerged as a promising framework for tractography, outperforming traditional methods in several key aspects. TractOracle-RL, a recent RL-based approach, reduces false positives by incorporating anatomical priors into the training process via a reward-based mechanism. In this paper, we investigate four extensions of the original TractOracle-RL framework by integrating recent advances in RL, and we evaluate their performance across five diverse diffusion MRI datasets. Results demonstrate that combining an oracle with the RL framework consistently leads to robust and reliable tractography, regardless of the specific method or dataset used. We also introduce a novel RL training scheme called Iterative Reward Training (IRT), inspired by the Reinforcement Learning from Human Feedback (RLHF) paradigm. Instead of relying on human input, IRT leverages bundle filtering methods to iteratively refine the oracle's guidance throughout training. Experimental results show that RL methods trained with oracle feedback significantly outperform widely used tractography techniques in terms of accuracy and anatomical validity.