
SegQC: a segmentation network-based framework for multi-metric segmentation quality control and segmentation error detection in volumetric medical images.

Specktor-Fadida B, Ben-Sira L, Ben-Bashat D, Joskowicz L

PubMed · Jul 1 2025
Quality control (QC) of structure segmentation in volumetric medical images is important for identifying segmentation errors in clinical practice and for facilitating model development by enhancing network performance in semi-supervised and active learning scenarios. This paper introduces SegQC, a novel framework for segmentation quality estimation and segmentation error detection. SegQC computes an estimate of segmentation quality for volumetric scans and their individual slices and identifies possible segmentation error regions within a slice. The key components of SegQC are: 1) SegQCNet, a deep network that takes a scan and its segmentation mask as input and outputs a segmentation error probability for each voxel; 2) three new segmentation quality metrics computed from the segmentation error probabilities; 3) a new method for detecting possible segmentation errors in scan slices, also computed from the segmentation error probabilities. We introduce a novel evaluation scheme for measuring segmentation error discrepancies, based on an expert radiologist's corrections of automatically produced segmentations, that yields smaller observer variability and is closer to the actual segmentation errors. We demonstrate SegQC on three fetal structures in 198 fetal MRI scans: the fetal brain, fetal body, and placenta. To assess the benefits of SegQC, we compare it to unsupervised Test Time Augmentation (TTA)-based QC and to supervised autoencoder (AE)-based QC. Our studies indicate that SegQC outperforms TTA-based quality estimation for whole scans and individual slices in terms of Pearson correlation and MAE for fetal body and fetal brain segmentation, as well as for volumetric overlap metric estimation of the placenta. Compared to both the unsupervised TTA and supervised AE methods, SegQC achieves lower MAE for both 3D and 2D Dice estimates and higher Pearson correlation for volumetric Dice. 
Our segmentation error detection method achieved recall and precision of 0.77 and 0.48 for the fetal body and 0.74 and 0.55 for the fetal brain, respectively. Rankings derived from metric estimation surpass rankings based on entropy and sum for TTA and SegQCNet estimations, respectively. SegQC provides high-quality metrics estimation for both 2D and 3D medical images as well as error localization within slices, offering important improvements to segmentation QC.
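For readers wanting to reproduce this style of evaluation, a minimal pure-Python sketch (not the authors' SegQC code) of the three quantities the comparison rests on: Dice overlap, and the Pearson correlation and MAE between estimated and reference quality scores.

```python
from math import sqrt

def dice(pred, ref):
    """Dice overlap between two binary masks given as flat 0/1 lists."""
    inter = sum(p and r for p, r in zip(pred, ref))
    total = sum(pred) + sum(ref)
    return 2.0 * inter / total if total else 1.0

def pearson(xs, ys):
    """Pearson correlation between estimated and reference quality scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def mae(xs, ys):
    """Mean absolute error between estimated and reference scores."""
    return sum(abs(x - y) for x, y in zip(xs, ys)) / len(xs)
```

A QC estimator that tracks true per-scan Dice well yields high Pearson correlation and low MAE, which is exactly how the SegQC-versus-TTA comparison above is scored.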

The implementation of artificial intelligence in serial monitoring of post gamma knife vestibular schwannomas: A pilot study.

Singh M, Jester N, Lorr S, Briano A, Schwartz N, Mahajan A, Chiang V, Tommasini SM, Wiznia DH, Buono FD

PubMed · Jul 1 2025
Vestibular schwannomas (VS) are benign tumors that can lead to hearing loss, balance issues, and tinnitus. Gamma Knife Radiosurgery (GKS) is a common treatment for VS, aimed at halting tumor growth and preserving neurological function. Accurate monitoring of VS volume before and after GKS is essential for assessing treatment efficacy. To evaluate the accuracy of an artificial intelligence (AI) algorithm, originally developed to identify NF2-SWN-related VS, in segmenting non-NF2-SWN-related VS and detecting volume changes pre- and post-GKS. We hypothesize this AI algorithm, trained on NF2-SWN-related VS data, will accurately apply to non-NF2-SWN VS and VS treated with GKS. In this retrospective cohort study, we reviewed data from an established Gamma Knife database, identifying 16 patients who underwent GKS for VS and had pre- and post-GKS scans. Contrast-enhanced T1-weighted MRI scans were analyzed with both manual segmentation and the AI algorithm. DICE similarity coefficients were computed to compare AI and manual segmentations, and a paired t-test was used to assess statistical significance. Volume changes for pre- and post-GKS scans were calculated for both segmentation methods. The mean DICE score between AI and manual segmentations was 0.91 (range 0.79-0.97). Pre- and post-GKS DICE scores were 0.91 (range 0.79-0.97) and 0.92 (range 0.81-0.97), indicating high spatial overlap. AI-segmented VS volumes pre- and post-GKS were consistent with manual measurements, with high DICE scores indicating strong spatial overlap. The AI algorithm processed scans within 5 min, suggesting it offers a reliable, efficient alternative for clinical monitoring. DICE scores showed high similarity between manual and AI segmentations. The pre- and post-GKS VS volume percentage changes were also similar between manual and AI-segmented VS volumes, indicating that our AI algorithm can accurately detect changes in tumor growth.
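The volume-change comparison above reduces to voxel counting; a minimal illustrative sketch (not the study's pipeline), assuming binary masks flattened to 0/1 lists and a known per-voxel volume:

```python
def volume_from_mask(mask, voxel_volume_mm3):
    """Tumor volume = number of foreground voxels x voxel volume (mm^3)."""
    return sum(mask) * voxel_volume_mm3

def percent_change(pre_mm3, post_mm3):
    """Relative volume change after treatment, in percent (negative = shrinkage)."""
    return 100.0 * (post_mm3 - pre_mm3) / pre_mm3
```

Applying the same two functions to the AI and the manual masks is what makes the pre-/post-GKS percentage changes directly comparable between segmentation methods.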

A Minimal Annotation Pipeline for Deep Learning Segmentation of Skeletal Muscles.

Baudin PY, Balsiger F, Beck L, Boisserie JM, Jouan S, Marty B, Reyngoudt H, Scheidegger O

PubMed · Jul 1 2025
Translating quantitative skeletal muscle MRI biomarkers into the clinic requires efficient automatic segmentation methods. The purpose of this work is to investigate a simple yet effective iterative methodology for building a high-quality automatic segmentation model while minimizing the manual annotation effort. We used a retrospective database of quantitative MRI examinations (n = 70) of healthy and pathological thighs to train a nnU-Net segmentation model. The cohort comprised healthy volunteers and patients with various neuromuscular diseases (NMDs), broadly categorized as dystrophic, inflammatory, neurogenic, and unlabeled. We designed an iterative procedure, progressively adding cases to the training set and using a simple visual five-level rating scale to judge the validity of generated segmentations for clinical use. On an independent test set (n = 20), we assessed segmentation quality in 13 individual thigh muscles using standard segmentation metrics, the Dice coefficient (DICE) and the 95% Hausdorff distance (HD95), and quantitative biomarkers: cross-sectional area (CSA), fat fraction (FF), and water-T1/T2. We obtained high-quality segmentations (DICE = 0.88 ± 0.15/0.86 ± 0.14, HD95 = 6.35 ± 12.33/6.74 ± 11.57 mm), comparable to recent works, although with a smaller training set (n = 30). Inter-rater agreement on the five-level scale was fair to moderate but showed progressive improvement of the segmentation model across iterations. We observed limited differences from manually delineated segmentations on the quantitative outcomes (MAD: CSA = 65.2 mm<sup>2</sup>, FF = 1%, water-T1 = 8.4 ms, water-T2 = 0.35 ms), with variability comparable to manual delineations.
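HD95, used above alongside DICE, is the 95th percentile of symmetric nearest-point surface distances; a hedged 2D sketch (real pipelines compute it on 3D surface voxels, and percentile conventions vary slightly between toolkits):

```python
from math import hypot, ceil

def hd95(a_pts, b_pts):
    """95% Hausdorff distance between two 2D point sets (lists of (x, y))."""
    def directed(src, dst):
        # sorted distances from each src point to its nearest dst point
        return sorted(min(hypot(x - u, y - v) for u, v in dst) for x, y in src)
    def p95(ds):
        # nearest-rank 95th percentile of the sorted distances
        return ds[min(len(ds) - 1, ceil(0.95 * len(ds)) - 1)]
    return max(p95(directed(a_pts, b_pts)), p95(directed(b_pts, a_pts)))
```

Unlike the maximum Hausdorff distance, the 95th-percentile variant discards the worst 5% of boundary mismatches, which is why it is preferred for noisy anatomical contours.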

Intermuscular adipose tissue and lean muscle mass assessed with MRI in people with chronic back pain in Germany: a retrospective observational study.

Ziegelmayer S, Häntze H, Mertens C, Busch F, Lemke T, Kather JN, Truhn D, Kim SH, Wiestler B, Graf M, Kader A, Bamberg F, Schlett CL, Weiss JB, Schulz-Menger J, Ringhof S, Can E, Pischon T, Niendorf T, Lammert J, Schulze M, Keil T, Peters A, Hadamitzky M, Makowski MR, Adams L, Bressem K

PubMed · Jul 1 2025
Chronic back pain (CBP) affects over 80 million people in Europe, contributing to substantial healthcare costs and disability. Understanding modifiable risk factors, such as muscle composition, may aid in prevention and treatment. This study investigates the association between lean muscle mass (LMM) and intermuscular adipose tissue (InterMAT) with CBP using noninvasive whole-body magnetic resonance imaging (MRI). This cross-sectional analysis used whole-body MRI data from 30,868 participants in the German National Cohort (NAKO), collected between 1 May 2014 and 1 September 2019. CBP was defined as back pain persisting >3 months. LMM and InterMAT were quantified via MRI-based muscle segmentations using a validated deep learning model. Associations were analyzed using mixed logistic regression, adjusting for age, sex, diabetes, dyslipidemia, osteoporosis, osteoarthritis, physical activity, and study site. Among 27,518 participants (n = 12,193/44.3% female, n = 14,605/55.7% male; median age 49 years, IQR 41-57), 21.8% (n = 6003; n = 2999/50.0% female, n = 3004/50% male; median age 53 years, IQR 46-60) reported CBP, compared to 78.2% (n = 21,515; n = 9194/42.7% female, n = 12,321/57.3% male; median age 48 years, IQR 39-56) who did not. CBP prevalence was highest in those with low (<500 MET min/week) or high (>5000 MET min/week) self-reported physical activity levels (24.6% (n = 10,892) and 22.0% (n = 3800), respectively) compared to moderate (500-5000 MET min/week) levels (19.4% (n = 12,826); p < 0.0001). Adjusted analyses revealed that a higher InterMAT (OR 1.22 per 2-unit Z-score; 95% CI 1.13-1.30; p < 0.0001) was associated with an increased likelihood of chronic back pain (CBP), whereas higher lean muscle mass (LMM) (OR 0.87 per 2-unit Z-score; 95% CI 0.79-0.95; p = 0.003) was associated with a reduced likelihood of CBP. 
Stratified analyses confirmed these associations persisted in individuals with osteoarthritis (OA-CBP LMM: 22.9 cm<sup>3</sup>/kg/m; InterMAT: 7.53% vs OA-No CBP LMM: 24.3 cm<sup>3</sup>/kg/m; InterMAT: 6.96% both p < 0.0001) and osteoporosis (OP-CBP LMM: 20.9 cm<sup>3</sup>/kg/m; InterMAT: 8.43% vs OP-No CBP LMM: 21.3 cm<sup>3</sup>/kg/m; InterMAT: 7.9% p = 0.16 and p = 0.0019). Higher pain intensity (Pain Intensity Numerical Rating Scale ≥4) correlated with lower LMM (2-unit Z-score deviation = OR, 0.63; 95% CI, 0.57-0.70; p < 0.0001) and higher InterMAT (2-unit Z-score deviation = OR, 1.22; 95% CI, 1.13-1.30; p < 0.0001), independent of physical activity, osteoporosis and osteoarthritis. This large, population-based study highlights the associations of InterMAT and LMM with CBP. Given the limitations of the cross-sectional design, our findings can be seen as an impetus for further causal investigations within a broader, multidisciplinary framework to guide future research toward improved prevention and treatment. The NAKO is funded by the Federal Ministry of Education and Research (BMBF) [project funding reference numbers: 01ER1301A/B/C, 01ER1511D, 01ER1801A/B/C/D and 01ER2301A/B/C], federal states of Germany and the Helmholtz Association, the participating universities and the institutes of the Leibniz Association.
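The odds ratios "per 2-unit Z-score" reported above follow directly from the logistic regression coefficient of the standardized predictor; a small illustrative helper (the coefficient value in the usage note is hypothetical, chosen only to show the arithmetic):

```python
from math import exp

def odds_ratio(beta, delta=2.0):
    """Odds ratio for a delta-unit increase in a standardized (Z-score)
    predictor, given its logistic regression coefficient beta."""
    return exp(beta * delta)
```

For instance, a per-1-SD coefficient of roughly 0.0993 on a standardized InterMAT variable would correspond to an OR of about 1.22 per 2-unit Z-score, the magnitude reported in the study.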

Mechanically assisted non-invasive ventilation for liver SABR: Improve CBCT, treat more accurately.

Pierrard J, Audag N, Massih CA, Garcia MA, Moreno EA, Colot A, Jardinet S, Mony R, Nevez Marques AF, Servaes L, Tison T, den Bossche VV, Etume AW, Zouheir L, Ooteghem GV

PubMed · Jul 1 2025
Cone-beam computed tomography (CBCT) for image-guided radiotherapy (IGRT) during liver stereotactic ablative radiotherapy (SABR) is degraded by respiratory motion artefacts, potentially jeopardising treatment accuracy. Mechanically assisted non-invasive ventilation-induced breath-hold (MANIV-BH) can reduce these artefacts. This study compares MANIV-BH and free-breathing (FB) CBCTs regarding image quality, IGRT variability, automatic registration accuracy, and deep-learning auto-segmentation performance. Liver SABR CBCTs were presented blindly to 14 operators: 25 patients with FB and 25 with MANIV-BH. They rated CBCT quality and IGRT ease (rigid registration with the planning CT). Interoperator IGRT variability was compared between FB and MANIV-BH. Automatic gross tumour volume (GTV) mapping accuracy was compared using automatic rigid registration and image-guided deformable registration. Deep-learning organ-at-risk (OAR) auto-segmentation was rated by an operator, who recorded the time dedicated to manual correction of these volumes. MANIV-BH significantly improved CBCT image quality ("Excellent"/"Good": 83.4% versus 25.4% with FB, p < 0.001), facilitated IGRT ("Very easy"/"Easy": 68.0% versus 38.9% with FB, p < 0.001), and reduced IGRT variability, particularly for trained operators (overall variability of 3.2 mm versus 4.6 mm with FB, p = 0.010). MANIV-BH improved deep-learning auto-segmentation performance (80.0% rated "Excellent"/"Good" versus 4.0% with FB, p < 0.001) and reduced median manual correction time by 54.2% compared to FB (p < 0.001). However, automatic GTV mapping accuracy was not significantly different between MANIV-BH and FB. In liver SABR, MANIV-BH significantly improves CBCT quality, reduces interoperator IGRT variability, and enhances OAR auto-segmentation. Beyond being safe and effective for respiratory motion mitigation, MANIV increases accuracy during treatment delivery, although its implementation requires resources.

[A deep learning method for differentiating nasopharyngeal carcinoma and lymphoma based on MRI].

Tang Y, Hua H, Wang Y, Tao Z

PubMed · Jul 1 2025
<b>Objective:</b> To develop a deep learning (DL) model based on conventional MRI for automatic segmentation and differential diagnosis of nasopharyngeal carcinoma (NPC) and nasopharyngeal lymphoma (NPL). <b>Methods:</b> This retrospective study included 142 patients with NPL and 292 patients with NPC who underwent conventional MRI at Renmin Hospital of Wuhan University from June 2012 to February 2023. MRI scans from 80 patients were manually segmented to train the segmentation model. The automatically segmented regions of interest (ROIs) formed four datasets: T1-weighted images (T1WI), T2-weighted images (T2WI), contrast-enhanced T1-weighted images (T1CE), and a combination of T1WI and T2WI. An ImageNet-pretrained ResNet101 model was fine-tuned for the classification task. Statistical analysis was conducted using SPSS 22.0. The Dice coefficient loss was used to evaluate the performance of the segmentation task. Diagnostic performance was assessed using receiver operating characteristic (ROC) curves. Gradient-weighted class activation mapping (Grad-CAM) was applied to visualize the model's decisions. <b>Results:</b> The DICE score of the segmentation model reached 0.876 on the testing set. The AUC values of the classification models on the testing set were as follows: T1WI, 0.78 (95% <i>CI</i> 0.67-0.81); T2WI, 0.75 (95% <i>CI</i> 0.72-0.86); T1CE, 0.84 (95% <i>CI</i> 0.76-0.87); and T1WI+T2WI, 0.93 (95% <i>CI</i> 0.85-0.94). The AUC values for the two clinicians were 0.77 (95% <i>CI</i> 0.72-0.82) for the junior and 0.84 (95% <i>CI</i> 0.80-0.89) for the senior. Grad-CAM analysis revealed that the central region of the tumor was highly correlated with the model's classification decisions, while the correlation was lower in the peripheral regions. <b>Conclusion:</b> The deep learning model performed well in differentiating NPC from NPL based on conventional MRI. The T1WI+T2WI combination model exhibited the best performance. 
The model can assist in the early diagnosis of NPC and NPL, facilitating timely and standardized treatment, which may improve patient prognosis.
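The AUC values quoted above can be understood through the Mann-Whitney formulation: AUC is the probability that a randomly chosen positive case scores above a randomly chosen negative one, with ties counted as one half. A minimal sketch (illustrative, not the study's SPSS analysis):

```python
def auc(scores, labels):
    """AUC via the Mann-Whitney U formulation: fraction of positive/negative
    pairs in which the positive case receives the higher score."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This pairwise reading makes comparisons like "model 0.93 versus senior clinician 0.84" concrete: the combined model correctly ranks a random NPC/NPL pair more often.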

Deep learning for automated segmentation of radiation-induced changes in cerebral arteriovenous malformations following radiosurgery.

Ho HH, Yang HC, Yang WX, Lee CC, Wu HM, Lai IC, Chen CJ, Peng SJ

PubMed · Jul 1 2025
Despite the widespread use of stereotactic radiosurgery (SRS) to treat cerebral arteriovenous malformations (AVMs), this procedure can lead to radiation-induced changes (RICs) in the surrounding brain tissue. Volumetric assessment of RICs is crucial for therapy planning and monitoring. RICs that appear as hyperintense areas in magnetic resonance T2-weighted (T2w) images are clearly identifiable; however, physicians lack tools for the segmentation and quantification of these areas. This paper presents an algorithm to calculate the volume of RICs in patients with AVMs following SRS. The algorithm could be used to predict the course of RICs and facilitate clinical management. We trained a Mask Region-based Convolutional Neural Network (Mask R-CNN) as an alternative to manual pre-processing in designating regions of interest. We also applied transfer learning to the DeepMedic deep learning model to facilitate the automatic segmentation and quantification of AVM edema regions in T2w images. The resulting quantitative findings were used to explore the effects of SRS treatment among 28 patients with unruptured AVMs based on 139 regularly tracked T2w scans. The actual extent of RICs in the T2w images was labeled manually by a clinical radiologist to serve as the gold standard for supervised learning. The trained model was tasked with segmenting the test set for comparison with the manual segmentation results. The average Dice similarity coefficient in these comparisons was 71.8%. The proposed segmentation algorithm achieved results on par with conventional manual calculations in determining the volume of RICs, which were shown to peak at the end of the first year after SRS and then gradually decrease. These findings have the potential to enhance clinical decision-making. Not applicable.

Convolutional neural network for maxillary sinus segmentation based on the U-Net architecture at different planes in the Chinese population: a semantic segmentation study.

Chen J

PubMed · Jul 1 2025
The development of artificial intelligence has revolutionized the field of dentistry. Medical image segmentation is a vital part of AI applications in dentistry. This technique can assist medical practitioners in accurately diagnosing diseases. Detection of the maxillary sinus (MS) is important in surgical procedures such as dental implant placement, tooth extraction, and endoscopic surgery. The accurate segmentation of the MS in radiological images is a prerequisite for diagnosis and treatment planning. This study aims to investigate the feasibility of applying a CNN algorithm based on the U-Net architecture to facilitate MS segmentation in individuals from the Chinese population. A total of 300 CBCT images in the axial, coronal, and sagittal planes were used in this study. These images were divided into a training set and a test set at a ratio of 8:2. The marked regions (maxillary sinus) were labelled for training and testing in the original images. The training process was performed for 40 epochs using a learning rate of 0.00001. Computation was performed on a GeForce RTX 3060 GPU. The best model was retained for predicting the MS in the test set and calculating the model parameters. The trained U-Net model achieved high segmentation accuracy across the three imaging planes. The IoU values were 0.942, 0.937, and 0.916 in the axial, sagittal, and coronal planes, respectively, with F1 scores across all planes exceeding 0.95. The accuracies of the U-Net model were 0.997, 0.998, and 0.995 in the axial, sagittal, and coronal planes, respectively. The trained U-Net model achieved highly accurate segmentation of the MS across the three planes on the basis of 2D CBCT images from the Chinese population. The AI model shows promising potential for daily clinical practice. Not applicable.
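The IoU and F1 scores reported above derive from the same overlap counts, and for binary segmentation F1 equals the Dice coefficient; a short illustrative sketch:

```python
def iou_f1(pred, ref):
    """IoU and F1 (equal to Dice for binary masks) from flat 0/1 lists."""
    tp = sum(p and r for p, r in zip(pred, ref))        # true positives
    fp = sum(p and not r for p, r in zip(pred, ref))    # false positives
    fn = sum(r and not p for p, r in zip(pred, ref))    # false negatives
    iou = tp / (tp + fp + fn)
    f1 = 2 * tp / (2 * tp + fp + fn)
    return iou, f1
```

Since F1 = 2·IoU/(1+IoU), an IoU of 0.942 implies an F1 near 0.97, consistent with the paper's report of F1 scores above 0.95 in every plane.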

An AI-based tool for prosthetic crown segmentation serving automated intraoral scan-to-CBCT registration in challenging high artifact scenarios.

Elgarba BM, Ali S, Fontenele RC, Meeus J, Jacobs R

PubMed · Jul 1 2025
Accurately registering intraoral and cone beam computed tomography (CBCT) scans in patients with metal artifacts poses a significant challenge. Whether a cloud-based platform trained for artificial intelligence (AI)-driven segmentation can improve registration is unclear. The purpose of this clinical study was to validate a cloud-based platform trained for the AI-driven segmentation of prosthetic crowns on CBCT scans and subsequent multimodal intraoral scan-to-CBCT registration in the presence of high metal artifact expression. A dataset consisting of 30 time-matched maxillary and mandibular CBCT and intraoral scans, each containing at least 4 prosthetic crowns, was collected. CBCT acquisition involved placing cotton rolls between the cheeks and teeth to facilitate soft tissue delineation. Segmentation and registration were compared using either a semi-automated (SA) method or an AI-automated (AA) method. SA served as the clinical reference, in which prosthetic crowns and their radicular parts (natural roots or implants) were segmented by thresholding with point surface-based registration. The AA method included fully automated segmentation and registration based on AI algorithms. Quantitative assessment compared AA's median surface deviation (MSD) and root mean square (RMS) in crown segmentation and subsequent intraoral scan-to-CBCT registration with those of SA. Additionally, segmented crown STL files were analyzed voxel-wise for comparison between AA and SA. A qualitative assessment of AA-based crown segmentation evaluated the need for refinement, while the AA-based registration assessment scrutinized the alignment of the registered intraoral scan with the CBCT teeth and soft tissue contours. Ultimately, the study compared the time efficiency and consistency of both methods. Quantitative outcomes were analyzed with the Kruskal-Wallis, Mann-Whitney, and Student t tests, and qualitative outcomes with the Wilcoxon test (all α=.05). 
Consistency was evaluated by using the intraclass correlation coefficient (ICC). Quantitatively, AA methods excelled with a 0.91 Dice Similarity Coefficient for crown segmentation and an MSD of 0.03 ±0.05 mm for intraoral scan-to-CBCT registration. Additionally, AA achieved 91% clinically acceptable matches of teeth and gingiva on CBCT scans, surpassing SA method's 80%. Furthermore, AA was significantly faster than SA (P<.05), being 200 times faster in segmentation and 4.5 times faster in registration. Both AA and SA exhibited excellent consistency in segmentation and registration, with ICC values of 0.99 and 1 for AA and 0.99 and 0.96 for SA, respectively. The novel cloud-based platform demonstrated accurate, consistent, and time-efficient prosthetic crown segmentation, as well as intraoral scan-to-CBCT registration in scenarios with high artifact expression.
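MSD and RMS as used in this comparison are two summaries of the same set of point-wise surface deviation distances; a minimal sketch (illustrative, not the cloud platform's implementation):

```python
from math import sqrt
from statistics import median

def msd_rms(deviations_mm):
    """Median surface deviation (MSD) and root mean square (RMS)
    of point-to-surface distances after registration, in mm."""
    msd = median(deviations_mm)
    rms = sqrt(sum(d * d for d in deviations_mm) / len(deviations_mm))
    return msd, rms
```

The median is robust to a few large outlier deviations (e.g., artifact-corrupted crown surfaces), whereas the RMS penalizes them quadratically, which is why the two are typically reported together.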

Automated Fast Prediction of Bone Mineral Density From Low-dose Computed Tomography.

Zhou K, Xin E, Yang S, Luo X, Zhu Y, Zeng Y, Fu J, Ruan Z, Wang R, Geng D, Yang L

PubMed · Jul 1 2025
Low-dose chest CT (LDCT) is commonly employed for the early screening of lung cancer. However, it has rarely been utilized in the assessment of volumetric bone mineral density (vBMD) and the diagnosis of osteoporosis (OP). This study investigated the feasibility of using deep learning to establish a system for vBMD prediction and OP classification based on LDCT scans. The study included 551 subjects who underwent both LDCT and QCT examinations. First, a U-Net was developed to automatically segment lumbar vertebrae from single 2D LDCT slices near the mid-vertebral level. Then, a prediction model was proposed to estimate vBMD, which was subsequently employed for detecting OP and osteopenia (OA). Specifically, two input modalities were constructed for the prediction model. The performance metrics of the models were calculated and evaluated. The segmentation model exhibited a strong correlation with manual segmentation, achieving a mean Dice similarity coefficient (DSC) of 0.974, sensitivity of 0.964, positive predictive value (PPV) of 0.985, and Hausdorff distance of 3.261 in the test set. Linear regression and Bland-Altman analysis demonstrated strong agreement between the vBMD predicted from two-channel inputs and QCT-derived vBMD, with a root mean square error of 8.958 mg/mm<sup>3</sup> and an R<sup>2</sup> of 0.944. The areas under the curve for detecting OP and OA were 0.800 and 0.878, respectively, with an overall accuracy of 94.2%. The average processing time for this system was 1.5 s. This prediction system can automatically estimate vBMD and detect OP and OA on LDCT scans, showing great potential for osteoporosis screening.
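The RMSE and R<sup>2</sup> agreement statistics quoted above are standard; a compact sketch (illustrative only) of how they are computed from paired predicted and QCT-derived vBMD values:

```python
def rmse_r2(pred, true):
    """Root mean square error and coefficient of determination R^2
    for paired predicted vs. reference values."""
    n = len(true)
    mean_t = sum(true) / n
    ss_res = sum((p - t) ** 2 for p, t in zip(pred, true))  # residual sum of squares
    ss_tot = sum((t - mean_t) ** 2 for t in true)           # total sum of squares
    return (ss_res / n) ** 0.5, 1.0 - ss_res / ss_tot
```

RMSE keeps the units of the measurement (here mg/mm<sup>3</sup>), while R<sup>2</sup> expresses the fraction of reference-value variance the predictions explain, so the pair gives both absolute and relative agreement.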
