Magnetic resonance imaging and the evaluation of vestibular schwannomas: a systematic review

Lee, K. S., Wijetilake, N., Connor, S., Vercauteren, T., Shapey, J.

medRxiv preprint · Jun 6, 2025
Introduction: The assessment of vestibular schwannoma (VS) requires a standardized measurement approach, as growth is a key element in defining the treatment strategy for VS. Volumetric measurements offer higher sensitivity and precision, but existing segmentation methods are labour-intensive, lack standardisation, and are prone to variability and subjectivity. A core set of measurement indicators, reported consistently, would support clinical decision-making and facilitate evidence synthesis. This systematic review aimed to identify the indicators used in 1) magnetic resonance imaging (MRI) acquisition, 2) measurement, and 3) growth assessment of VS, and is expected to inform a Delphi consensus. Methods: Systematic searches of Medline, Embase, and Cochrane Central were undertaken on 4 October 2024. Studies published between 2014 and 2024 that assessed the evaluation of VS with MRI were included. Results: The final dataset consisted of 102 studies and 19,001 patients. Eighty-six (84.3%) studies employed post-contrast T1 as the MRI acquisition of choice for evaluating VS. Nine (8.8%) studies additionally employed heavily T2-weighted sequences such as constructive interference in steady state (CISS) and FIESTA-C. Only 45 (44.1%) studies reported the slice thickness, with the majority (38; 84.4%) choosing a thickness of <3 mm. Fifty-eight (56.8%) studies measured volume, whilst 49 (48.0%) measured the largest linear dimension; 14 (13.7%) studies used both measurements. Four studies employed semi-automated or automated segmentation to measure VS volumes. Of 68 studies investigating growth, 54 (79.4%) provided a threshold. Significant variation in volumetric growth thresholds was observed, but the threshold for significant percentage change reported by most studies (n = 18) was 20%. Conclusion: Substantial variation in MRI acquisition and in methods for evaluating the measurement and growth of VS exists across the literature. This lack of standardization is likely attributable to resource constraints and to the fact that currently available volumetric segmentation methods are highly labour-intensive. Building on the indicators identified here, this work aims to develop a Delphi consensus for the standardized measurement of VS and to promote the uptake of data-driven, artificial intelligence-based measurement tools.
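
For illustration, a minimal sketch (not from the review) of how a volumetric growth check against the commonly reported 20% threshold could be computed from a binary segmentation mask and its voxel spacing; the mask shapes and spacings in the example are invented.

```python
import numpy as np

def tumour_volume_mm3(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    """Volume of a binary segmentation mask given voxel spacing (mm)."""
    voxel_volume = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return float(mask.astype(bool).sum()) * voxel_volume

def significant_growth(baseline_mm3: float, followup_mm3: float, threshold: float = 0.20) -> bool:
    """Flag growth if the relative volume increase exceeds the threshold (20% by default)."""
    return (followup_mm3 - baseline_mm3) / baseline_mm3 > threshold

# Example with two hypothetical masks from baseline and follow-up MRI
baseline = tumour_volume_mm3(np.ones((10, 10, 10)), (0.5, 0.5, 1.0))   # 250.0 mm^3
followup = tumour_volume_mm3(np.ones((11, 11, 10)), (0.5, 0.5, 1.0))   # 302.5 mm^3
print(significant_growth(baseline, followup))  # True (21% increase)
```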

[Albumin-myosteatosis gauge assisted by an artificial intelligence tool as a prognostic factor in patients with metastatic colorectal cancer].

de Luis Román D, Primo D, Izaola Jáuregui O, Sánchez Lite I, López Gómez JJ

PubMed paper · Jun 6, 2025
Objective: to evaluate the prognostic role of the albumin-myosteatosis marker (MAM) in Caucasian patients with metastatic colorectal cancer. Methods: this study involved 55 consecutive Caucasian patients diagnosed with metastatic colorectal cancer. CT scans at the L3 vertebral level were analyzed to determine skeletal muscle cross-sectional area, skeletal muscle index (SMI), and skeletal muscle density (SMD). Bioelectrical impedance analysis (BIA) was used to obtain phase angle, reactance, resistance, and SMI-BIA. Albumin and prealbumin were measured. MAM was calculated as serum albumin (g/dL) × skeletal muscle density (SMD, in Hounsfield units [HU]). Survival was estimated using the Kaplan-Meier method and comparisons between groups were performed using the log-rank test. Results: the mean age was 68.1 ± 9.1 years. Patients were divided into two groups based on the median MAM (129.1 AU for women and 156.3 AU for men). Patients in the low-MAM group had significantly reduced phase angle and reactance, as well as older age. These patients also had higher rates of malnutrition by GLIM criteria (odds ratio: 3.8; 95% CI = 1.2-12.9), low muscle mass diagnosed by CT (odds ratio: 3.6; 95% CI = 1.2-10.9), and mortality (odds ratio: 9.82; 95% CI = 1.2-10.9). Kaplan-Meier analysis demonstrated significant differences in 5-year survival between the low- and high-MAM groups (HR: 6.2; 95% CI = 1.10-37.5). Conclusions: MAM may serve as a prognostic marker of survival in Caucasian patients with metastatic colorectal cancer.
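
As a worked illustration of the formula reported above (MAM = albumin in g/dL × SMD in HU) and of the sex-specific median cut-offs, a small hedged sketch follows; the patient values in the example are invented.

```python
def albumin_myosteatosis_marker(albumin_g_dl: float, smd_hu: float) -> float:
    """MAM = serum albumin (g/dL) x skeletal muscle density (HU), as defined in the abstract."""
    return albumin_g_dl * smd_hu

# Sex-specific median cut-offs reported in the study (AU)
MEDIAN_MAM = {"female": 129.1, "male": 156.3}

def mam_group(albumin_g_dl: float, smd_hu: float, sex: str) -> str:
    """Assign a patient to the low- or high-MAM group relative to the reported median."""
    mam = albumin_myosteatosis_marker(albumin_g_dl, smd_hu)
    return "low" if mam < MEDIAN_MAM[sex] else "high"

# Example: albumin 3.8 g/dL, SMD 35 HU -> MAM = 133 AU ("low" for a man, "high" for a woman)
print(mam_group(3.8, 35.0, "male"))
```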

Hypothalamus and intracranial volume segmentation at the group level by use of a Gradio-CNN framework.

Vernikouskaya I, Rasche V, Kassubek J, Müller HP

PubMed paper · Jun 6, 2025
This study aimed to develop and evaluate a graphical user interface (GUI) for the automated segmentation of the hypothalamus and intracranial volume (ICV) in brain MRI scans. The interface was designed to facilitate efficient and accurate segmentation for research applications, with a focus on accessibility and ease of use for end-users. We developed a web-based GUI using the Gradio library, integrating deep learning-based segmentation models trained on annotated brain MRI scans. The model uses a U-Net architecture to delineate the hypothalamus and ICV. The GUI allows users to upload high-resolution MRI scans, visualize the segmentation results, calculate hypothalamic volume and ICV, and manually correct individual segmentation results. To ensure widespread accessibility, we deployed the interface using ngrok, allowing users to access the tool via a shared link. To illustrate the broad applicability of the approach, the tool was applied to a group of 90 patients with Parkinson's disease (PD) and 39 controls. The GUI demonstrated high usability and efficiency in segmenting the hypothalamus and the ICV, with no significant difference in normalized hypothalamic volume observed between PD patients and controls, consistent with previously published findings. The average processing time per patient volume was 18 s for the hypothalamus and 44 s for the ICV segmentation on a 6 GB NVIDIA GeForce GTX 1060 GPU. The ngrok-based deployment allowed for seamless access across different devices and operating systems, with an average connection time of less than 5 s. The developed GUI provides a powerful and accessible tool for applications in neuroimaging. The combination of an intuitive interface, accurate deep learning-based segmentation, and easy deployment via ngrok addresses the need for user-friendly tools in brain MRI analysis. This approach has the potential to streamline workflows in neuroimaging research.
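
To make the workflow concrete, here is a minimal Gradio sketch of this kind of upload-and-segment interface. The `segment_hypothalamus` function and the thresholding inside it are placeholders for the authors' trained U-Net, and the layout is an assumption, not the published tool.

```python
import gradio as gr
import nibabel as nib
import numpy as np

def segment_hypothalamus(nifti_path: str) -> str:
    """Load an MRI volume, run a (placeholder) segmentation, and report the volume in mm^3."""
    img = nib.load(nifti_path)
    data = np.asarray(img.dataobj)
    mask = data > np.percentile(data, 99.5)          # stand-in for the real U-Net inference
    voxel_mm3 = float(np.prod(img.header.get_zooms()[:3]))
    return f"Estimated hypothalamic volume: {mask.sum() * voxel_mm3:.1f} mm^3"

demo = gr.Interface(
    fn=segment_hypothalamus,
    inputs=gr.File(label="T1-weighted MRI (NIfTI)", type="filepath"),
    outputs=gr.Textbox(label="Result"),
    title="Hypothalamus segmentation (demo sketch)",
)

if __name__ == "__main__":
    demo.launch()  # the authors expose the running app through an ngrok tunnel
```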

Can transfer learning improve supervised segmentation of white matter bundles in glioma patients?

Riccardi, C., Ghezzi, S., Amorosino, G., Zigiotto, L., Sarubbo, S., Jovicich, J., Avesani, P.

bioRxiv preprint · Jun 6, 2025
In clinical neuroscience, segmentation of the main white matter bundles is a prerequisite for many tasks, such as pre-operative neurosurgical planning and the monitoring of neurological diseases. Automating bundle segmentation with data-driven approaches and deep learning models has shown promising accuracy in healthy individuals. However, the lack of large clinical datasets prevents the translation of these results to patients, and inference on patient data with models trained on a healthy population is not effective because of domain shift. This study carries out an empirical analysis to investigate how transfer learning might help overcome these limitations. For our analysis, we consider a public dataset with hundreds of individuals and a clinical dataset of glioma patients, focusing our preliminary investigation on the corticospinal tract. The results show that transfer learning might be effective in partially overcoming the domain shift.
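
A minimal PyTorch sketch of the kind of transfer-learning setup described: freeze the encoder of a network pretrained on healthy subjects and fine-tune the rest on a small clinical cohort. The tiny network and random tensors below are placeholders, not the models or data used in the study.

```python
import torch
import torch.nn as nn

class TinyBundleNet(nn.Module):
    """Toy encoder-decoder standing in for a bundle-segmentation network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv3d(1, 8, 3, padding=1), nn.ReLU())
        self.decoder = nn.Conv3d(8, 1, 3, padding=1)

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyBundleNet()
# In practice: model.load_state_dict(torch.load("healthy_pretrained.pt"))

for p in model.encoder.parameters():          # keep features learned on the healthy population
    p.requires_grad = False

optimizer = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

# One fine-tuning step on a fake "clinical" batch (random tensors as placeholders)
dwi = torch.randn(2, 1, 16, 16, 16)
bundle_mask = torch.randint(0, 2, (2, 1, 16, 16, 16)).float()
loss = criterion(model(dwi), bundle_mask)
loss.backward()
optimizer.step()
print(f"fine-tuning loss: {loss.item():.3f}")
```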

TissUnet: Improved Extracranial Tissue and Cranium Segmentation for Children through Adulthood

Markiian Mandzak, Elvira Yang, Anna Zapaishchykova, Yu-Hui Chen, Lucas Heilbroner, John Zielke, Divyanshu Tak, Reza Mojahed-Yazdi, Francesca Romana Mussa, Zezhong Ye, Sridhar Vajapeyam, Viviana Benitez, Ralph Salloum, Susan N. Chi, Houman Sotoudeh, Jakob Seidlitz, Sabine Mueller, Hugo J. W. L. Aerts, Tina Y. Poussaint, Benjamin H. Kann

arXiv preprint · Jun 6, 2025
Extracranial tissues visible on brain magnetic resonance imaging (MRI) may hold significant value for characterizing health conditions and clinical decision-making, yet they are rarely quantified. Current tools have not been widely validated, particularly in settings of developing brains or underlying pathology. We present TissUnet, a deep learning model that segments skull bone, subcutaneous fat, and muscle from routine three-dimensional T1-weighted MRI, with or without contrast enhancement. The model was trained on 155 paired MRI-computed tomography (CT) scans and validated across nine datasets covering a wide age range and including individuals with brain tumors. In comparison to AI-CT-derived labels from 37 MRI-CT pairs, TissUnet achieved a median Dice coefficient of 0.79 [IQR: 0.77-0.81] in a healthy adult cohort. In a second validation using expert manual annotations, median Dice was 0.83 [IQR: 0.83-0.84] in healthy individuals and 0.81 [IQR: 0.78-0.83] in tumor cases, outperforming the previous state-of-the-art method. Acceptability testing resulted in an 89% acceptance rate after adjudication by a tie-breaker (N=108 MRIs), and TissUnet demonstrated excellent performance in the blinded comparative review (N=45 MRIs), including both healthy and tumor cases in pediatric populations. TissUnet enables fast, accurate, and reproducible segmentation of extracranial tissues, supporting large-scale studies on craniofacial morphology, treatment effects, and cardiometabolic risk using standard brain T1w MRI.
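
The Dice scores above are summarized as median [IQR]; a short sketch of how such a per-cohort summary could be computed from per-subject Dice scores follows (the score list is invented).

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

def median_iqr(values) -> str:
    """Report scores as median [IQR], the format used in the abstract."""
    q25, q50, q75 = np.percentile(values, [25, 50, 75])
    return f"{q50:.2f} [IQR: {q25:.2f}-{q75:.2f}]"

# Hypothetical per-subject Dice scores for a validation cohort
scores = [0.77, 0.79, 0.80, 0.81, 0.78, 0.82]
print(median_iqr(scores))
```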

Development of a Deep Learning Model for the Volumetric Assessment of Osteonecrosis of the Femoral Head on Three-Dimensional Magnetic Resonance Imaging.

Uemura K, Takashima K, Otake Y, Li G, Mae H, Okada S, Hamada H, Sugano N

PubMed paper · Jun 6, 2025
Although volumetric assessment of necrotic lesions using the Steinberg classification predicts future collapse in osteonecrosis of the femoral head (ONFH), quantifying these lesions on magnetic resonance imaging (MRI) generally requires considerable time and effort, preventing the Steinberg classification from being routinely used in clinical investigations. This study therefore aimed to use deep learning to develop a method for automatically segmenting necrotic lesions on MRI and automatically classifying them according to the Steinberg classification. A total of 63 hips from patients who had ONFH without collapse were included. An orthopaedic surgeon manually segmented the femoral head and necrotic lesions on MRI acquired using a spoiled gradient-echo sequence. Based on the manual segmentation, 22 hips were classified as Steinberg grade A, 23 as Steinberg grade B, and 18 as Steinberg grade C. The manually segmented labels were used to train a deep learning model based on a 5-layer Dynamic U-Net. Four-fold cross-validation was performed to assess segmentation accuracy using the Dice coefficient (DC) and average symmetric distance (ASD), and classification accuracy according to the Steinberg classification was evaluated along with the weighted kappa coefficient. The median DC and ASD for the femoral head region were 0.95 (interquartile range [IQR], 0.95 to 0.96) and 0.65 mm (IQR, 0.59 to 0.75), respectively. For necrotic lesions, the median DC and ASD were 0.89 (IQR, 0.85 to 0.92) and 0.76 mm (IQR, 0.58 to 0.96), respectively. The Steinberg grading matched the manual classification in 59 hips (accuracy: 93.7%), with a weighted kappa coefficient of 0.98. The proposed deep learning model exhibited high accuracy in segmenting necrotic lesions and grading them according to the Steinberg classification on MRI, and can be used to assist clinicians in the volumetric assessment of ONFH.
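
As an illustration of how automatic volumes could feed the grading step, a hedged sketch follows; the <15% / 15-30% / >30% extent cut-offs are the commonly cited Steinberg criteria and are an assumption here, since the abstract does not state the thresholds used.

```python
def steinberg_subgrade(necrotic_mm3: float, head_mm3: float) -> str:
    """
    Map the necrotic-volume fraction of the femoral head to a Steinberg subgrade.
    Cut-offs (<15% = A, 15-30% = B, >30% = C) follow commonly cited Steinberg extent
    criteria; confirm against the grading scheme actually used in the study.
    """
    fraction = necrotic_mm3 / head_mm3
    if fraction < 0.15:
        return "A"
    return "B" if fraction <= 0.30 else "C"

# Example with volumes from hypothetical automatic segmentations (mm^3)
print(steinberg_subgrade(necrotic_mm3=9_500.0, head_mm3=52_000.0))  # ~18% -> "B"
```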

Query Nearby: Offset-Adjusted Mask2Former enhances small-organ segmentation

Xin Zhang, Dongdong Meng, Sheng Li

arXiv preprint · Jun 6, 2025
Medical image segmentation plays an important role in clinical applications such as radiation therapy and surgical guidance, but obtaining clinically acceptable results is difficult. In recent years, progress has come from transformer-like models, for example those combining the attention mechanism with CNNs. In particular, transformer-based segmentation models can extract global information more effectively, compensating for CNN modules that focus on local features. However, transformer architectures are not easy to use, because training transformer-based models can be resource-demanding. Moreover, owing to the distinct characteristics of medical images, especially mid-sized and small organs with compact regions, their results often remain unsatisfactory; for example, using ViT to segment medical images directly gives a DSC of less than 50%, far below the clinically acceptable score of 80%. In this paper, we used Mask2Former with deformable attention to reduce computation and proposed offset adjustment strategies that encourage sampling points to fall within the same organ during attention-weight computation, thereby better integrating compact foreground information. Additionally, we utilized the 4th feature map in Mask2Former to provide a coarse location of organs, and employed an FCN-based auxiliary head trained with Dice loss to help Mask2Former converge more quickly. We show that our model achieves state-of-the-art (SOTA) performance on the HaNSeg and SegRap2023 datasets, especially on mid-sized and small organs. Our code is available at https://github.com/earis/Offsetadjustment_Background-location_Decoder_Mask2former.
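
A simplified sketch of an FCN-style auxiliary head supervised with a soft Dice loss on an intermediate feature map, in the spirit of the auxiliary supervision described above; channel counts, class count, and resolutions are invented, and this is not the authors' code.

```python
import torch
import torch.nn as nn

def soft_dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Soft Dice loss averaged over batch and classes."""
    probs = torch.sigmoid(logits)
    dims = (2, 3)
    intersection = (probs * target).sum(dims)
    union = probs.sum(dims) + target.sum(dims)
    return 1.0 - ((2 * intersection + eps) / (union + eps)).mean()

class AuxFCNHead(nn.Module):
    """Lightweight FCN head attached to an intermediate feature map to predict coarse masks."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, in_channels // 2, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 2, num_classes, 1),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return self.head(feats)

# Example: coarse supervision on a low-resolution backbone feature map
feats = torch.randn(2, 256, 32, 32)                      # hypothetical feature map
target = torch.randint(0, 2, (2, 30, 32, 32)).float()    # 30 organ classes, downsampled masks
aux = AuxFCNHead(256, 30)
loss = soft_dice_loss(aux(feats), target)
print(loss.item())
```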

Application of Mask R-CNN for automatic recognition of teeth and caries in cone-beam computerized tomography.

Ma Y, Al-Aroomi MA, Zheng Y, Ren W, Liu P, Wu Q, Liang Y, Jiang C

PubMed paper · Jun 6, 2025
Deep convolutional neural networks (CNNs) are advancing rapidly in medical research, demonstrating promising results in diagnosis and prediction within radiology and pathology. This study evaluates the efficacy of deep learning algorithms for detecting and diagnosing dental caries on cone-beam computed tomography (CBCT) using the Mask R-CNN architecture, comparing different training configurations to enhance detection. A total of 2,128 CBCT images were divided into training, validation, and test datasets in a 7:1:1 ratio. For the verification of tooth recognition, data from the validation set were randomly selected for analysis. Three Mask R-CNN configurations were compared: a baseline trained from scratch with randomly initialized weights (group R); a transfer-learning approach with models pre-trained on COCO for object detection (group C); and a variant pre-trained on ImageNet for object detection (group I). All configurations used identical hyperparameter settings to ensure fair comparison. Each model used ResNet-50 as the backbone network and was trained for up to 300 epochs. We assessed training loss, detection and training times, diagnostic accuracy, specificity, positive and negative predictive values, and coverage precision to compare performance across the groups. Transfer learning significantly reduced training times compared to the non-transfer-learning approach (p < 0.05). The average detection time for group R was 0.269 ± 0.176 s, whereas groups I (0.323 ± 0.196 s) and C (0.346 ± 0.195 s) exhibited significantly longer detection times (p < 0.05). Group C, trained for 200 epochs, achieved a mean average precision (mAP) of 81.095, outperforming all other groups. The mAP for caries recognition in group R, trained for 300 epochs, was 53.328, with detection times under 0.5 s. Overall, group C demonstrated significantly higher average precision across all epochs (100, 200, and 300) (p < 0.05). Networks pre-trained on COCO exhibited superior annotation accuracy compared to those pre-trained on ImageNet, suggesting that COCO's diverse and richly annotated images offer more relevant features for detecting dental structures and carious lesions. Furthermore, employing ResNet-50 as the backbone architecture enhances the detection of teeth and carious regions, achieving significant improvements with just 200 training epochs and potentially increasing the efficiency of clinical image interpretation.
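
As a concrete, hedged example of the COCO transfer-learning setup used in group C, the torchvision sketch below loads COCO-pretrained Mask R-CNN weights and swaps in new box and mask heads; the three-class label set is an assumption for illustration, not the study's configuration.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # background, tooth, caries (assumed label set)

# Mask R-CNN with a ResNet-50 FPN backbone, initialized from COCO weights
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# Replace the box predictor for the new class count
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, NUM_CLASSES)

# Replace the mask predictor likewise
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, NUM_CLASSES)

model.train()  # ready for fine-tuning on CBCT slices with tooth/caries annotations
```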

A Fully Automatic Pipeline of Identification, Segmentation, and Subtyping of Aortic Dissection from CT Angiography.

Zhuang C, Wu Y, Qi Q, Zhao S, Sun Y, Hou J, Qian W, Yang B, Qi S

PubMed paper · Jun 6, 2025
Aortic dissection (AD) is a rare condition with a high mortality rate, necessitating accurate and rapid diagnosis. This study develops an automated deep learning pipeline for identifying, segmenting, and Stanford subtyping of AD on computed tomography angiography (CTA) images. The pipeline consists of four interconnected modules: aorta segmentation, AD identification, true lumen (TL) and false lumen (FL) segmentation, and Stanford subtyping. In the aorta segmentation module, a 3D full-resolution nnU-Net is trained. In the AD identification module, the segmented aorta's boundary is extracted using morphological operations and projected from multiple views, and identification is then performed on the multi-view projection data. For AD cases, a 3D nnU-Net is further trained for TL/FL segmentation based on the segmented aorta. Finally, a network is trained for Stanford subtyping using multi-view maximum density projections of the segmented TL/FL. A total of 386 CTA scans were collected for training, validation, and testing of the pipeline. For AD identification, the method achieved an accuracy of 0.979. TL/FL segmentation for Type-A and Type-B AD achieved average Dice coefficients of 0.968 for the TL and 0.971 for the FL. For Stanford subtyping, the multi-view method achieved an accuracy of 0.990. The automated pipeline enables rapid and accurate identification, segmentation, and Stanford subtyping of AD on CTA images, potentially accelerating diagnosis and treatment. The segmented aorta and TL/FL can also serve as references for physicians. The code, models, and pipeline are publicly available at https://github.com/zhuangCJ/A-pipeline-of-AD.git .
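
To illustrate the multi-view projection idea used for identification and subtyping, a small hedged sketch follows; the label volume, axis naming, and use of simple maximum projections over labels are simplifications, not the pipeline's exact implementation.

```python
import numpy as np

def multiview_max_projections(mask: np.ndarray) -> dict[str, np.ndarray]:
    """Maximum projections of a 3D label volume along the three anatomical axes,
    analogous to the multi-view inputs fed to the identification/subtyping networks."""
    return {
        "axial": mask.max(axis=0),
        "coronal": mask.max(axis=1),
        "sagittal": mask.max(axis=2),
    }

# Example with a hypothetical TL/FL label volume (0 = background, 1 = TL, 2 = FL)
labels = np.zeros((64, 64, 64), dtype=np.uint8)
labels[20:40, 25:35, 25:35] = 1
labels[20:40, 35:45, 25:35] = 2
views = multiview_max_projections(labels)
print({name: view.shape for name, view in views.items()})  # three 2D projections
```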