SurgPointTransformer: transformer-based vertebra shape completion using RGB-D imaging.

Massalimova A, Liebmann F, Jecklin S, Carrillo F, Farshad M, Fürnstahl P

PubMed · Dec 1 2025
State-of-the-art computer- and robot-assisted surgery systems rely on intraoperative imaging technologies such as computed tomography and fluoroscopy to provide detailed 3D visualizations of patient anatomy. However, these methods expose both patients and clinicians to ionizing radiation. This study introduces a radiation-free approach for 3D spine reconstruction using RGB-D data. Inspired by the "mental map" surgeons form during procedures, we present SurgPointTransformer, a shape completion method that reconstructs unexposed spinal regions from sparse surface observations. The method begins with a vertebra segmentation step that extracts vertebra-level point clouds for subsequent shape completion. SurgPointTransformer then uses an attention mechanism to learn the relationship between visible surface features and the complete spine structure. The approach is evaluated on an ex vivo dataset comprising nine samples, with CT-derived data used as ground truth. SurgPointTransformer significantly outperforms state-of-the-art baselines, achieving a Chamfer distance of 5.39 mm, an F-score of 0.85, an Earth mover's distance of 11.00 and a signal-to-noise ratio of 22.90 dB. These results demonstrate the potential of our method to reconstruct 3D vertebral shapes without exposing patients to ionizing radiation. This work contributes to the advancement of computer-aided and robot-assisted surgery by enhancing system perception and intelligence.
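
As a rough illustration of the headline metric (not code from the paper), the sketch below computes a symmetric Chamfer distance between a predicted and a ground-truth point cloud with NumPy/SciPy; the point clouds, sizes, and averaging convention are assumptions.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer distance: mean nearest-neighbour distance, averaged over both directions.

    pred, gt: (N, 3) and (M, 3) arrays of point coordinates in millimetres.
    Conventions vary (sum vs. mean of the two terms); this is one common choice.
    """
    d_pred_to_gt, _ = cKDTree(gt).query(pred)   # for each predicted point, its nearest GT point
    d_gt_to_pred, _ = cKDTree(pred).query(gt)   # for each GT point, its nearest predicted point
    return float(d_pred_to_gt.mean() + d_gt_to_pred.mean()) / 2.0

# Hypothetical example: a completed vertebra surface vs. its CT-derived ground truth
pred = np.random.rand(2048, 3) * 50.0
gt = np.random.rand(4096, 3) * 50.0
print(f"Chamfer distance: {chamfer_distance(pred, gt):.2f} mm")
```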

Application of Artificial Intelligence in rheumatic disease classification: an example of ankylosing spondylitis severity inspection model.

Chen CW, Tsai HH, Yeh CY, Yang CK, Tsou HK, Leong PY, Wei JC

PubMed · Dec 1 2025
An Artificial Intelligence (AI)-based severity inspection model for ankylosing spondylitis (AS) could help health professionals rapidly assess disease severity, enhance proficiency, and reduce demands on human resources. This paper aims to develop such a model for AS using patients' X-ray images and the modified Stoke Ankylosing Spondylitis Spinal Score (mSASSS). The AI model was developed through data preprocessing, model building and testing, and model evaluation. For preprocessing, three experts reviewed the X-ray images of 222 patients against the gold standard. The model was then developed in two stages: keypoint detection and mSASSS evaluation. The resulting two-stage AI-based severity inspection model automatically detects spine points and evaluates mSASSS scores. Finally, the model's outputs were compared with expert assessments to analyse its accuracy. The study was conducted in accordance with the ethical principles outlined in the Declaration of Helsinki. The first-stage spine point detection achieved a mean error distance of 1.57 micrometres from the ground truth, and the second-stage classification network reached a mean accuracy of 0.81. The model correctly identified 97.4% of patches belonging to mSASSS score 3, while patches belonging to score 0 were sometimes misclassified as score 1 or 2. The automatic severity inspection model for AS developed in this paper is accurate and can support health professionals in rapidly assessing AS severity, enhancing assessment proficiency, and reducing demands on human resources.
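
The per-score accuracies reported above can be read off a confusion matrix of patch predictions. The following minimal sketch (not the authors' code; the labels and four-class setup are assumptions) shows one way to compute them.

```python
import numpy as np

def per_class_recall(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int = 4) -> np.ndarray:
    """Fraction of patches of each true mSASSS score (0..n_classes-1) labelled correctly."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    row_sums = cm.sum(axis=1)
    # Per-class recall = diagonal / row sum, guarding against scores absent from y_true
    return np.divide(np.diag(cm), row_sums, out=np.zeros(n_classes), where=row_sums > 0)

# Hypothetical patch-level labels and predictions
y_true = np.array([3, 3, 3, 0, 0, 1, 2, 3])
y_pred = np.array([3, 3, 3, 1, 2, 1, 2, 3])
print(per_class_recall(y_true, y_pred))  # in this made-up example, score-0 patches are all misclassified
```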

Establishment and evaluation of an automatic multi-sequence MRI segmentation model of primary central nervous system lymphoma based on the nnU-Net deep learning network method.

Wang T, Tang X, Du J, Jia Y, Mou W, Lu G

PubMed · Jul 1 2025
Accurate quantitative assessment using gadolinium-contrast magnetic resonance imaging (MRI) is crucial in therapy planning, surveillance and prognostic assessment of primary central nervous system lymphoma (PCNSL). The present study aimed to develop a multimodal artificial intelligence deep learning segmentation model to address the challenges associated with traditional 2D measurements and manual volume assessments in MRI. Data from 49 pathologically confirmed patients with PCNSL from six Chinese medical centers were analyzed, and regions of interest were manually segmented on contrast-enhanced T1-weighted and T2-weighted MRI scans for each patient, followed by fully automated voxel-wise segmentation of tumor components using a 3-dimensional convolutional deep neural network. Furthermore, the efficiency of the model was evaluated using practical indicators, and its consistency and accuracy were compared with traditional methods. The performance of the models was assessed using the Dice similarity coefficient (DSC). The Mann-Whitney U test was used to compare continuous clinical variables and the χ² test was used for comparisons between categorical clinical variables. T1WI sequences exhibited the best performance (training Dice: 0.923, testing Dice: 0.830, outer validation Dice: 0.801), while T2WI showed relatively poor performance (training Dice: 0.761, testing Dice: 0.647, outer validation Dice: 0.643). In conclusion, the automatic multi-sequence MRI segmentation model for PCNSL in the present study displayed a high spatial overlap ratio and tumor volumes similar to routine manual segmentation, indicating its significant potential.
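
For reference, the Dice similarity coefficient quoted above can be computed from two binary masks as in the sketch below (illustrative only; the mask shapes and values are made up).

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary 3D masks of equal shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + gt.sum() + eps))

# Hypothetical tumour masks from a contrast-enhanced T1WI volume
pred = np.zeros((64, 64, 32), dtype=bool); pred[20:40, 20:40, 10:20] = True
gt = np.zeros_like(pred); gt[22:42, 22:42, 10:20] = True
print(f"DSC = {dice_coefficient(pred, gt):.3f}")
```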

[A deep learning method for differentiating nasopharyngeal carcinoma and lymphoma based on MRI].

Tang Y, Hua H, Wang Y, Tao Z

PubMed · Jul 1 2025
Objective: To develop a deep learning (DL) model based on conventional MRI for automatic segmentation and differential diagnosis of nasopharyngeal carcinoma (NPC) and nasopharyngeal lymphoma (NPL). Methods: The retrospective study included 142 patients with NPL and 292 patients with NPC who underwent conventional MRI at Renmin Hospital of Wuhan University from June 2012 to February 2023. MRI scans from 80 patients were manually segmented to train the segmentation model. The automatically segmented regions of interest (ROIs) formed four datasets: T1-weighted images (T1WI), T2-weighted images (T2WI), T1-weighted contrast-enhanced images (T1CE), and a combination of T1WI and T2WI. An ImageNet-pretrained ResNet101 model was fine-tuned for the classification task. Statistical analysis was conducted using SPSS 22.0. The Dice coefficient loss was used to evaluate the performance of the segmentation task. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves. Gradient-weighted class activation mapping (Grad-CAM) was used to visualize the model's decisions. Results: The Dice score of the segmentation model reached 0.876 on the testing set. The AUC values of the classification models on the testing set were as follows: T1WI: 0.78 (95% CI 0.67-0.81), T2WI: 0.75 (95% CI 0.72-0.86), T1CE: 0.84 (95% CI 0.76-0.87), and T1WI+T2WI: 0.93 (95% CI 0.85-0.94). The AUC values for the two clinicians were 0.77 (95% CI 0.72-0.82) for the junior and 0.84 (95% CI 0.80-0.89) for the senior. Grad-CAM analysis revealed that the central region of the tumor was highly correlated with the model's classification decisions, while the correlation was lower in the peripheral regions. Conclusion: The deep learning model performed well in differentiating NPC from NPL based on conventional MRI. The T1WI+T2WI combination model exhibited the best performance. The model can assist in the early diagnosis of NPC and NPL, facilitating timely and standardized treatment, which may improve patient prognosis.
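
A minimal sketch of the fine-tuning step described above, assuming a recent PyTorch/torchvision: an ImageNet-pretrained ResNet101 has its final fully connected layer replaced for the two-class NPC-versus-NPL task. Preprocessing, Grad-CAM, and the full training loop are omitted, and the layer-freezing choice is an assumption, not the authors' setting.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet101 and replace its head for 2 classes (NPC vs. NPL)
model = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

# Optionally freeze early layers and fine-tune only the deeper blocks and the new head
for name, param in model.named_parameters():
    param.requires_grad = name.startswith(("layer3", "layer4", "fc"))

optimizer = torch.optim.Adam(filter(lambda p: p.requires_grad, model.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Hypothetical training step on a batch of ROI crops resized to 224x224 (3 channels)
x = torch.randn(4, 3, 224, 224)
logits = model(x)
loss = criterion(logits, torch.tensor([0, 1, 0, 1]))
loss.backward()
optimizer.step()
```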

Mechanically assisted non-invasive ventilation for liver SABR: Improve CBCT, treat more accurately.

Pierrard J, Audag N, Massih CA, Garcia MA, Moreno EA, Colot A, Jardinet S, Mony R, Nevez Marques AF, Servaes L, Tison T, den Bossche VV, Etume AW, Zouheir L, Ooteghem GV

PubMed · Jul 1 2025
Cone-beam computed tomography (CBCT) for image-guided radiotherapy (IGRT) during liver stereotactic ablative radiotherapy (SABR) is degraded by respiratory motion artefacts, potentially jeopardising treatment accuracy. Mechanically assisted non-invasive ventilation-induced breath-hold (MANIV-BH) can reduce these artefacts. This study compares MANIV-BH and free-breathing (FB) CBCTs regarding image quality, IGRT variability, automatic registration accuracy, and deep-learning auto-segmentation performance. Liver SABR CBCTs were presented blindly to 14 operators: 25 patients with FB and 25 with MANIV-BH. They rated CBCT quality and IGRT ease (rigid registration with planning CT). Interoperator IGRT variability was compared between FB and MANIV-BH. Automatic gross tumour volume (GTV) mapping accuracy was compared using automatic rigid registration and image-guided deformable registration. Deep-learning organ-at-risk (OAR) auto-segmentation was rated by an operator, who recorded the time dedicated for manual correction of these volumes. MANIV-BH significantly improved CBCT image quality ("Excellent"/"Good": 83.4 % versus 25.4 % with FB, p < 0.001), facilitated IGRT ("Very easy"/"Easy": 68.0 % versus 38.9 % with FB, p < 0.001), and reduced IGRT variability, particularly for trained operators (overall variability of 3.2 mm versus 4.6 mm with FB, p = 0.010). MANIV-BH improved deep-learning auto-segmentation performance (80.0 % rated "Excellent"/"Good" versus 4.0 % with FB, p < 0.001), and reduced median manual correction time by 54.2 % compared to FB (p < 0.001). However, automatic GTV mapping accuracy was not significantly different between MANIV-BH and FB. In liver SABR, MANIV-BH significantly improves CBCT quality, reduces interoperator IGRT variability, and enhances OAR auto-segmentation. Beyond being safe and effective for respiratory motion mitigation, MANIV increases accuracy during treatment delivery, although its implementation requires resources.

Intermuscular adipose tissue and lean muscle mass assessed with MRI in people with chronic back pain in Germany: a retrospective observational study.

Ziegelmayer S, Häntze H, Mertens C, Busch F, Lemke T, Kather JN, Truhn D, Kim SH, Wiestler B, Graf M, Kader A, Bamberg F, Schlett CL, Weiss JB, Schulz-Menger J, Ringhof S, Can E, Pischon T, Niendorf T, Lammert J, Schulze M, Keil T, Peters A, Hadamitzky M, Makowski MR, Adams L, Bressem K

PubMed · Jul 1 2025
Chronic back pain (CBP) affects over 80 million people in Europe, contributing to substantial healthcare costs and disability. Understanding modifiable risk factors, such as muscle composition, may aid in prevention and treatment. This study investigates the association of lean muscle mass (LMM) and intermuscular adipose tissue (InterMAT) with CBP using noninvasive whole-body magnetic resonance imaging (MRI). This cross-sectional analysis used whole-body MRI data from 30,868 participants in the German National Cohort (NAKO), collected between 1 May 2014 and 1 September 2019. CBP was defined as back pain persisting >3 months. LMM and InterMAT were quantified via MRI-based muscle segmentations using a validated deep learning model. Associations were analyzed using mixed logistic regression, adjusting for age, sex, diabetes, dyslipidemia, osteoporosis, osteoarthritis, physical activity, and study site. Among 27,518 participants (n = 12,193/44.3% female, n = 14,605/55.7% male; median age 49 years, IQR 41-57), 21.8% (n = 6003; n = 2999/50.0% female, n = 3004/50% male; median age 53 years, IQR 46-60) reported CBP, compared to 78.2% (n = 21,515; n = 9194/42.7% female, n = 12,321/57.3% male; median age 48 years, IQR 39-56) who did not. CBP prevalence was highest in those with low (<500 MET min/week) or high (>5000 MET min/week) self-reported physical activity levels (24.6% (n = 10,892) and 22.0% (n = 3800), respectively) compared to moderate (500-5000 MET min/week) levels (19.4% (n = 12,826); p < 0.0001). Adjusted analyses revealed that a higher InterMAT (OR 1.22 per 2-unit Z-score; 95% CI 1.13-1.30; p < 0.0001) was associated with an increased likelihood of chronic back pain (CBP), whereas higher lean muscle mass (LMM) (OR 0.87 per 2-unit Z-score; 95% CI 0.79-0.95; p = 0.003) was associated with a reduced likelihood of CBP. Stratified analyses confirmed these associations persisted in individuals with osteoarthritis (OA-CBP LMM: 22.9 cm³/kg/m; InterMAT: 7.53% vs OA-No CBP LMM: 24.3 cm³/kg/m; InterMAT: 6.96%, both p < 0.0001) and osteoporosis (OP-CBP LMM: 20.9 cm³/kg/m; InterMAT: 8.43% vs OP-No CBP LMM: 21.3 cm³/kg/m; InterMAT: 7.9%, p = 0.16 and p = 0.0019). Higher pain intensity (Pain Intensity Numerical Rating Scale ≥4) correlated with lower LMM (OR 0.63 per 2-unit Z-score; 95% CI 0.57-0.70; p < 0.0001) and higher InterMAT (OR 1.22 per 2-unit Z-score; 95% CI 1.13-1.30; p < 0.0001), independent of physical activity, osteoporosis and osteoarthritis. This large, population-based study highlights the associations of InterMAT and LMM with CBP. Given the limitations of the cross-sectional design, our findings can be seen as an impetus for further causal investigations within a broader, multidisciplinary framework to guide future research toward improved prevention and treatment. The NAKO is funded by the Federal Ministry of Education and Research (BMBF) [project funding reference numbers: 01ER1301A/B/C, 01ER1511D, 01ER1801A/B/C/D and 01ER2301A/B/C], federal states of Germany and the Helmholtz Association, the participating universities and the institutes of the Leibniz Association.
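
The odds ratios above are reported per 2-unit Z-score change. As a simplified illustration (an ordinary logistic regression rather than the study's mixed model, with fabricated data and placeholder covariates), the sketch below shows how such an estimate could be obtained with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: InterMAT and LMM as Z-scores, CBP as a binary outcome
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "intermat_z": rng.normal(size=n),
    "lmm_z": rng.normal(size=n),
    "age": rng.integers(20, 70, size=n),
    "sex": rng.integers(0, 2, size=n),
})
logit = -1.2 + 0.1 * df.intermat_z - 0.07 * df.lmm_z
p = 1 / (1 + np.exp(-logit))
df["cbp"] = rng.binomial(1, p.to_numpy())

# Plain logistic regression (the study additionally included a random effect for study site)
model = smf.logit("cbp ~ intermat_z + lmm_z + age + C(sex)", data=df).fit(disp=0)
# Odds ratio per 2-unit Z-score change = exp(2 * beta)
print(np.exp(2 * model.params[["intermat_z", "lmm_z"]]))
```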

A Minimal Annotation Pipeline for Deep Learning Segmentation of Skeletal Muscles.

Baudin PY, Balsiger F, Beck L, Boisserie JM, Jouan S, Marty B, Reyngoudt H, Scheidegger O

PubMed · Jul 1 2025
Translating quantitative skeletal muscle MRI biomarkers into clinical practice requires efficient automatic segmentation methods. The purpose of this work is to investigate a simple yet effective iterative methodology for building a high-quality automatic segmentation model while minimizing the manual annotation effort. We used a retrospective database of quantitative MRI examinations (n = 70) of healthy and pathological thighs for training an nnU-Net segmentation model. The cohort included healthy volunteers and patients with various neuromuscular diseases (NMDs), broadly categorized as dystrophic, inflammatory, neurogenic, and unlabeled. We designed an iterative procedure, progressively adding cases to the training set and using a simple visual five-level rating scale to judge the validity of generated segmentations for clinical use. On an independent test set (n = 20), we assessed the quality of the segmentation in 13 individual thigh muscles using standard segmentation metrics, the Dice coefficient (DICE) and the 95% Hausdorff distance (HD95), and quantitative biomarkers: cross-sectional area (CSA), fat fraction (FF), and water-T1/T2. We obtained high-quality segmentations (DICE = 0.88 ± 0.15/0.86 ± 0.14, HD95 = 6.35 ± 12.33/6.74 ± 11.57 mm), comparable to recent works, although with a smaller training set (n = 30). Inter-rater agreement on the five-level scale was fair to moderate, and the ratings showed progressive improvement of the segmentation model across iterations. We observed limited differences from manually delineated segmentations on the quantitative outcomes (MAD: CSA = 65.2 mm², FF = 1%, water-T1 = 8.4 ms, water-T2 = 0.35 ms), with variability comparable to manual delineations.
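
A minimal sketch of the HD95 metric used above: the 95th-percentile symmetric Hausdorff distance between two binary masks, computed from their surface voxels. The pooling convention, voxel spacing, and masks are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage
from scipy.spatial import cKDTree

def hd95(mask_a: np.ndarray, mask_b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile Hausdorff distance in mm (one common convention: pooled surface distances)."""
    def surface_points(mask):
        eroded = ndimage.binary_erosion(mask)
        border = mask & ~eroded                      # voxels on the mask surface
        return np.argwhere(border) * np.asarray(spacing)

    pa = surface_points(mask_a.astype(bool))
    pb = surface_points(mask_b.astype(bool))
    d_ab, _ = cKDTree(pb).query(pa)                  # surface A to surface B
    d_ba, _ = cKDTree(pa).query(pb)                  # surface B to surface A
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Hypothetical thigh-muscle masks with anisotropic voxel spacing
a = np.zeros((40, 40, 40), dtype=bool); a[10:30, 10:30, 10:30] = True
b = np.zeros_like(a); b[12:32, 10:30, 10:30] = True
print(f"HD95 = {hd95(a, b, spacing=(1.5, 1.0, 1.0)):.2f} mm")
```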

The implementation of artificial intelligence in serial monitoring of post gamma knife vestibular schwannomas: A pilot study.

Singh M, Jester N, Lorr S, Briano A, Schwartz N, Mahajan A, Chiang V, Tommasini SM, Wiznia DH, Buono FD

PubMed · Jul 1 2025
Vestibular schwannomas (VS) are benign tumors that can lead to hearing loss, balance issues, and tinnitus. Gamma Knife Radiosurgery (GKS) is a common treatment for VS, aimed at halting tumor growth and preserving neurological function. Accurate monitoring of VS volume before and after GKS is essential for assessing treatment efficacy. The aim of this study was to evaluate the accuracy of an artificial intelligence (AI) algorithm, originally developed to identify NF2-SWN-related VS, in segmenting non-NF2-SWN-related VS and detecting volume changes pre- and post-GKS. We hypothesized that this AI algorithm, trained on NF2-SWN-related VS data, would apply accurately to non-NF2-SWN VS and to VS treated with GKS. In this retrospective cohort study, we reviewed data from an established Gamma Knife database, identifying 16 patients who underwent GKS for VS and had pre- and post-GKS scans. Contrast-enhanced T1-weighted MRI scans were analyzed with both manual segmentation and the AI algorithm. DICE similarity coefficients were computed to compare AI and manual segmentations, and a paired t-test was used to assess statistical significance. Volume changes between pre- and post-GKS scans were calculated for both segmentation methods. The mean DICE score between AI and manual segmentations was 0.91 (range 0.79-0.97). Pre- and post-GKS DICE scores were 0.91 (range 0.79-0.97) and 0.92 (range 0.81-0.97), indicating high spatial overlap between AI and manual segmentations. AI-segmented VS volumes pre- and post-GKS were consistent with manual measurements. The AI algorithm processed scans within 5 min, suggesting it offers a reliable, efficient alternative for clinical monitoring. The pre- and post-GKS VS volume percentage changes were also similar between manual and AI-segmented volumes, indicating that our AI algorithm can accurately detect changes in tumor growth.
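
A minimal sketch of the volume bookkeeping described above: converting pre- and post-GKS binary segmentations into millilitre volumes and a percentage change. Voxel spacing and masks are illustrative, not from the study.

```python
import numpy as np

def mask_volume_ml(mask: np.ndarray, spacing_mm=(1.0, 1.0, 1.0)) -> float:
    """Volume of a binary segmentation in millilitres (cm³), given voxel spacing in mm."""
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

def percent_volume_change(pre_ml: float, post_ml: float) -> float:
    return 100.0 * (post_ml - pre_ml) / pre_ml

# Hypothetical pre- and post-GKS vestibular schwannoma masks
pre = np.zeros((128, 128, 64), dtype=bool); pre[50:70, 50:70, 20:35] = True
post = np.zeros_like(pre); post[50:68, 50:68, 20:34] = True
pre_ml = mask_volume_ml(pre, (0.5, 0.5, 1.0))
post_ml = mask_volume_ml(post, (0.5, 0.5, 1.0))
print(f"Pre: {pre_ml:.2f} mL, Post: {post_ml:.2f} mL, change: {percent_volume_change(pre_ml, post_ml):.1f}%")
```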

SegQC: a segmentation network-based framework for multi-metric segmentation quality control and segmentation error detection in volumetric medical images.

Specktor-Fadida B, Ben-Sira L, Ben-Bashat D, Joskowicz L

PubMed · Jul 1 2025
Quality control (QC) of structure segmentation in volumetric medical images is important for identifying segmentation errors in clinical practice and for facilitating model development by enhancing network performance in semi-supervised and active learning scenarios. This paper introduces SegQC, a novel framework for segmentation quality estimation and segmentation error detection. SegQC estimates the quality of a segmentation in volumetric scans and in their individual slices and identifies possible segmentation error regions within a slice. The key components of SegQC are: 1) SegQCNet, a deep network that takes a scan and its segmentation mask as input and outputs segmentation error probabilities for each voxel in the scan; 2) three new segmentation quality metrics computed from the segmentation error probabilities; 3) a new method for detecting possible segmentation errors in scan slices, also computed from the segmentation error probabilities. We introduce a novel evaluation scheme for measuring segmentation error discrepancies, based on an expert radiologist's corrections of automatically produced segmentations, that yields smaller observer variability and is closer to actual segmentation errors. We demonstrate SegQC on three fetal structures in 198 fetal MRI scans: fetal brain, fetal body and the placenta. To assess the benefits of SegQC, we compare it to unsupervised Test Time Augmentation (TTA)-based QC and to supervised autoencoder (AE)-based QC. Our studies indicate that SegQC outperforms TTA-based quality estimation for whole scans and individual slices in terms of Pearson correlation and MAE for fetal body and fetal brain segmentation, as well as for volumetric overlap metrics estimation of the placenta. Compared to both unsupervised TTA and supervised AE methods, SegQC achieves lower MAE for both 3D and 2D Dice estimates and higher Pearson correlation for volumetric Dice. Our segmentation error detection method achieved recall and precision rates of 0.77 and 0.48 for fetal body and 0.74 and 0.55 for fetal brain, respectively. Rankings derived from the estimated metrics surpass rankings based on entropy and sum for the TTA and SegQCNet estimations, respectively. SegQC provides high-quality metric estimation for both 2D and 3D medical images as well as error localization within slices, offering important improvements to segmentation QC.
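
The abstract does not give the metric formulas, so the sketch below only illustrates the general idea of turning a voxel-wise segmentation-error probability map into per-slice quality scores and flagged slices; the score definition, threshold, and data are made up.

```python
import numpy as np

def slice_quality_scores(error_prob: np.ndarray, seg_mask: np.ndarray) -> np.ndarray:
    """Per-slice quality score: 1 minus the mean error probability inside the segmented region.

    error_prob: (Z, Y, X) voxel-wise segmentation-error probabilities in [0, 1].
    seg_mask:   (Z, Y, X) binary segmentation mask being assessed.
    """
    scores = np.ones(error_prob.shape[0])
    for z in range(error_prob.shape[0]):
        region = seg_mask[z].astype(bool)
        if region.any():
            scores[z] = 1.0 - error_prob[z][region].mean()
    return scores

# Hypothetical fetal-body segmentation and error-probability map
rng = np.random.default_rng(1)
seg = rng.random((32, 64, 64)) > 0.7
err = rng.random((32, 64, 64)) * seg          # error probabilities restricted to the mask
per_slice = slice_quality_scores(err, seg)
scan_quality = per_slice.mean()               # crude whole-scan estimate
flagged = np.where(per_slice < 0.5)[0]        # slices that may contain segmentation errors
print(scan_quality, flagged)
```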

Application and optimization of the U-Net++ model for cerebral artery segmentation based on computed tomographic angiography images.

Kim H, Seo KH, Kim K, Shim J, Lee Y

PubMed · Jul 1 2025
Accurate segmentation of cerebral arteries on computed tomography angiography (CTA) images is essential for the diagnosis and management of cerebrovascular diseases, including ischemic stroke. This study implemented a deep learning-based U-Net++ model for cerebral artery segmentation in CTA images, focusing on optimizing pruning levels by analyzing the trade-off between segmentation performance and computational cost. Dual-energy CTA and direct subtraction CTA datasets were utilized to segment the internal carotid and vertebral arteries in close proximity to the bone. We implemented four pruning levels (L1-L4) in the U-Net++ model and evaluated the segmentation performance using accuracy, intersection over union, F1-score, boundary F1-score, and Hausdorff distance. Statistical analyses were conducted to assess the significance of segmentation performance differences across pruning levels. In addition, we measured training and inference times to evaluate the trade-off between segmentation performance and computational efficiency. Applying deep supervision improved segmentation performance across all metrics. Although the L4 pruning level achieved the highest segmentation performance, L3 significantly reduced training and inference times (by an average of 51.56 % and 22.62 %, respectively) while incurring only a small decrease in segmentation performance (7.08 %) compared to L4. These results suggest that L3 achieves an optimal balance between performance and computational cost. This study demonstrates that pruning levels in U-Net++ models can be optimized to reduce computational cost while maintaining effective segmentation performance. By simplifying deep learning models, this approach can improve the efficiency of cerebrovascular segmentation, contributing to faster and more accurate diagnoses in clinical settings.
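
U-Net++ trained with deep supervision can be pruned at inference by reading the prediction from an intermediate supervision head. The sketch below is schematic (not the authors' implementation) and assumes a hypothetical model `unetpp` that returns one logit map per decoder depth; note that a truly pruned model would skip the deeper decoder nodes entirely, whereas this sketch merely selects the corresponding output.

```python
import torch

def pruned_prediction(model: torch.nn.Module, x: torch.Tensor, prune_level: int) -> torch.Tensor:
    """Run a deeply supervised U-Net++ and keep only the output at the chosen depth.

    Assumes `model(x)` returns a list of logit maps [L1, L2, L3, L4], one per nested
    decoder depth. Selecting index prune_level - 1 mimics pruning at that level.
    """
    with torch.no_grad():
        outputs = model(x)                        # list of (B, C, H, W) logit maps
    return torch.sigmoid(outputs[prune_level - 1])

# Usage sketch (hypothetical model and data): compare pruned (L3) and full (L4) predictions
# x = torch.randn(2, 1, 512, 512)                 # batch of CTA slices
# seg_l3 = pruned_prediction(unetpp, x, prune_level=3)
# seg_l4 = pruned_prediction(unetpp, x, prune_level=4)
```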