Machine learning-based method for the detection of dextrocardia in ultrasound video clips.

Hernandez-Cruz N, Patey O, Salovic B, Papageorghiou A, Noble JA

PubMed · Aug 20, 2025
Dextrocardia is a congenital anomaly arising during fetal development, characterised by abnormal positioning of the heart on the right side of the chest instead of its usual anatomical location on the left. This paper describes a machine learning-based method to automatically assess transverse ultrasound (US) videos for dextrocardia by analysing the Situs and four-chamber (4CH) views. The method processes user-captured US video sweeps that include the Situs and 4CH views. The automated analysis consists of three stages. First, four fetal anatomical structures (chest, spine, stomach and heart) are segmented using SegFormer, a Transformer-based segmentation model. Second, a quality assessment (QA) module verifies that the video includes informative frames. Third, the orientation of the stomach and heart relative to the fetal chest (right or left side) is determined to assess dextrocardia. Segmentation performance was evaluated using the Dice coefficient, and fetal anatomy centroid estimation accuracy using root mean squared error (RMSE). Dextrocardia was classified based on a frame-based classification score (FBCS). The datasets consist of 142 pairs of Situs and 4CH US images (284 frames in total) for training, and 14 US videos (7 normal, 7 dextrocardia, 2,916 frames in total) for testing. The method achieved Dice scores of 0.968, 0.958, 0.953 and 0.949 for chest, spine, stomach and heart segmentation, respectively, and anatomy centroid RMSE of 0.23 mm, 0.34 mm, 0.25 mm and 0.39 mm for the same structures. The QA module rejected 172 frames. The assessment for dextrocardia achieved an FBCS of 0.99, with a standard deviation of 0.01 for normal and 0.02 for dextrocardia videos. Our automated method demonstrates accurate segmentation and reliable detection of dextrocardia from US videos. Owing to its simple acquisition protocol and robust analytical pipeline, the method is suitable for healthcare providers who are not cardiac experts, and it has the potential to facilitate earlier and more consistent prenatal identification of dextrocardia during screening, particularly in settings with limited access to experts in fetal echocardiography.
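
The laterality check in the third stage reduces to simple geometry on the segmentation masks. Below is a minimal Python sketch of that idea, assuming binary masks per frame and a consistent image orientation; the function names and the cross-product construction are ours, not the paper's.

```python
# Minimal sketch of the orientation stage: given binary segmentation masks
# for one frame, decide on which side of the fetal chest the heart and
# stomach lie. All names are illustrative; the actual method may use a
# different geometric construction.
import numpy as np

def centroid(mask: np.ndarray) -> np.ndarray:
    """Centroid (row, col) of a binary mask."""
    ys, xs = np.nonzero(mask)
    return np.array([ys.mean(), xs.mean()])

def side_of_chest(chest: np.ndarray, spine: np.ndarray, organ: np.ndarray) -> str:
    """Classify an organ as 'left' or 'right' of the chest midline.

    The midline is taken as the axis from the spine centroid through the
    chest centroid; the sign of the 2D cross product tells us on which
    side the organ centroid falls (sign convention assumes a fixed image
    orientation).
    """
    c, s, o = centroid(chest), centroid(spine), centroid(organ)
    axis = c - s                 # posterior -> anterior direction
    to_organ = o - c
    cross = axis[0] * to_organ[1] - axis[1] * to_organ[0]
    return "left" if cross > 0 else "right"

def is_dextrocardia_suspected(chest, spine, heart, stomach) -> bool:
    # In situs solitus both heart and stomach lie on the fetal left; a
    # right-sided heart with a left-sided stomach suggests dextrocardia.
    return (side_of_chest(chest, spine, heart) == "right"
            and side_of_chest(chest, spine, stomach) == "left")
```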

A dataset of primary nasopharyngeal carcinoma MRI with multi-modalities segmentation.

Li Y, Chen Q, Li M, Si L, Guo Y, Xiong Y, Wang Q, Qin Y, Xu L, Smagt PV, Wang K, Tang J, Chen N

PubMed · Aug 20, 2025
Multi-modality magnetic resonance imaging (MRI) data facilitate early diagnosis, tumor segmentation, and disease staging in the management of nasopharyngeal carcinoma (NPC). However, the lack of publicly available, comprehensive datasets limits advances in diagnosis, treatment planning, and the development of machine learning algorithms for NPC. Addressing this critical need, we introduce the first comprehensive NPC MRI dataset, encompassing axial MR imaging of 277 primary NPC patients. The dataset includes T1-weighted, T2-weighted, and contrast-enhanced T1-weighted sequences, totaling 831 scans. Together with the corresponding clinical data, segmentations manually annotated and labeled by experienced radiologists offer a high-quality data resource on untreated primary NPC.

Automated mitral valve segmentation in PLAX-view transthoracic echocardiography for anatomical assessment and risk stratification.

Jansen GE, Molenaar MA, Schuuring MJ, Bouma BJ, Išgum I

PubMed · Aug 20, 2025
Accurate segmentation of the mitral valve in transthoracic echocardiography (TTE) enables the extraction of anatomical parameters that are important for guiding clinical management. However, manual mitral valve segmentation is time-consuming and prone to interobserver variability. To support robust automatic analysis of mitral valve anatomy, we propose a novel AI-based method for mitral valve segmentation and anatomical measurement extraction. We retrospectively collected echocardiographic exams from 1,756 consecutive patients with suspected coronary artery disease, together with expert-defined scores for mitral regurgitation (MR) severity and follow-up characteristics. PLAX-view videos were automatically identified, and the inner borders of the mitral valve leaflets were manually segmented in 182 patients. To automatically segment the leaflets, we designed a deep neural network that takes a video frame and outputs a distance map and a classification map for each leaflet, supervised by the manual segmentations. From the resulting automatic segmentations, we extracted leaflet length, annulus diameter, tenting area, and coaptation length. To demonstrate the clinical relevance of these automatically extracted measurements, we performed univariable and multivariable Cox regression survival analyses, with the clinical endpoint defined as heart-failure hospitalization or all-cause mortality. We trained the segmentation model on annotated frames from 111 patients and tested segmentation performance on a set of 71 patients. For the survival analysis, we included 1,117 patients (mean age 64.1 ± 12.4 years, 58% male, median follow-up 3.3 years). The trained model achieved an average surface distance of 0.89 mm, a Hausdorff distance of 3.34 mm, and a temporal consistency score of 97%; leaflet coaptation was accurately detected in 93% of annotated frames. In univariable Cox regression, automated annulus diameter (>35 mm, hazard ratio (HR) = 2.38, p<0.001), tenting area (>2.4 cm², HR = 2.48, p<0.001), tenting height (>10 mm, HR = 1.91, p<0.001), and coaptation length (>3 mm, HR = 1.53, p = 0.007) were significantly associated with the defined clinical endpoint. For reference, significant MR by expert assessment yielded an HR of 2.31 (p<0.001). In multivariable Cox regression, automated annulus diameter and coaptation length predicted the endpoint as independent parameters (p = 0.03 and p = 0.05, respectively). Our method allows accurate segmentation of the mitral valve in TTE and enables fully automated quantification of key measurements describing mitral valve anatomy, with the potential to improve risk stratification for cardiac patients.
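
The survival analysis step, Cox regression on dichotomized imaging measurements, can be reproduced in a few lines. Below is a hedged sketch using the lifelines package (not necessarily the authors' tooling) on synthetic data; the column names echo the abstract's cut-offs but are our own.

```python
# Sketch of univariable/multivariable Cox regression on dichotomized
# automated measurements, as in the analysis described above. Data are
# synthetic and purely illustrative.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    # Binary indicators for measurements above the abstract's thresholds
    "annulus_gt_35mm": rng.integers(0, 2, n),
    "coaptation_gt_3mm": rng.integers(0, 2, n),
    "followup_years": rng.exponential(3.3, n),   # time to event or censoring
    "event": rng.integers(0, 2, n),              # HF hospitalization or death
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="event")
cph.print_summary()   # hazard ratios and p-values, as reported above
```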

Attention-based deep learning network for predicting World Health Organization meningioma grade and Ki-67 expression based on magnetic resonance imaging.

Cheng X, Li H, Li C, Li J, Liu Z, Fan X, Lu C, Song K, Shen Z, Wang Z, Yang Q, Zhang J, Yin J, Qian C, You Y, Wang X

PubMed · Aug 20, 2025
Preoperative assessment of World Health Organization (WHO) meningioma grade and Ki-67 expression is crucial for treatment strategy. We aimed to develop a fully automated attention-based deep learning network to predict WHO meningioma grade and Ki-67 expression. This retrospective study included 952 meningioma patients, divided into training (n = 542), internal validation (n = 96), and external test (n = 314) sets. For each task, clinical, radiomics, and deep learning models were compared. We used no-new-UNet (nnU-Net) models to construct the segmentation network, followed by four classification models using ResNet50 or Swin Transformer architectures with 2D or 2.5D input strategies. All deep learning models incorporated attention mechanisms. Both the segmentation and 2.5D classification models demonstrated robust performance on the external test set. The segmentation network achieved Dice coefficients of 0.98 (0.97-0.99) and 0.87 (0.83-0.91) for brain parenchyma and tumour segmentation, respectively. For predicting meningioma grade, the 2.5D ResNet50 achieved the highest area under the curve (AUC) of 0.90 (0.85-0.93), significantly outperforming the clinical (AUC = 0.77 [0.70-0.83], p < 0.001) and radiomics (AUC = 0.80 [0.75-0.85], p < 0.001) models. For Ki-67 expression prediction, the 2.5D Swin Transformer achieved the highest AUC of 0.89 (0.85-0.93), outperforming both the clinical (AUC = 0.76 [0.71-0.81], p < 0.001) and radiomics (AUC = 0.82 [0.77-0.86], p = 0.002) models. Our automated deep learning network demonstrated superior performance and could support more precise treatment planning for meningioma patients.
Question: Can artificial intelligence accurately assess meningioma WHO grade and Ki-67 expression from preoperative MRI to guide personalised treatment and follow-up strategies?
Findings: The attention-enhanced nnU-Net segmentation achieved high accuracy, while 2.5D deep learning models with attention mechanisms accurately predicted grade and Ki-67 expression.
Clinical relevance: Our fully automated 2.5D deep learning model, enhanced with attention mechanisms, accurately predicts WHO grade and Ki-67 expression in meningiomas, offering a robust, objective, and non-invasive solution to support clinical diagnosis and optimise treatment planning.
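
A "2.5D input strategy" is commonly implemented by stacking a slice of interest with its neighbours as input channels, giving a 2D network some through-plane context. Below is a sketch of that reading, with torchvision's stock ResNet50 standing in for the attention-augmented models described above; the details are our assumption, not the paper's code.

```python
# Sketch of a 2.5D classification input: the tumour-centred slice plus
# its two neighbours stacked as the three channels of a standard ResNet50.
import torch
import torchvision.models as models

def make_25d_input(volume: torch.Tensor, z: int) -> torch.Tensor:
    """volume: (D, H, W) MRI; returns (1, 3, H, W) with slices z-1, z, z+1."""
    lo, hi = max(z - 1, 0), min(z + 1, volume.shape[0] - 1)
    stack = torch.stack([volume[lo], volume[z], volume[hi]])  # (3, H, W)
    return stack.unsqueeze(0)

model = models.resnet50(num_classes=2)       # e.g. low- vs high-grade
x = make_25d_input(torch.randn(32, 224, 224), z=16)
logits = model(x)                            # shape (1, 2)
```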

An MRI Atlas of the Human Fetal Brain: Reference and Segmentation Tools for Fetal Brain MRI Analysis

Mahdi Bagheri, Clemente Velasco-Annis, Jian Wang, Razieh Faghihpirayesh, Shadab Khan, Camilo Calixto, Camilo Jaimes, Lana Vasung, Abdelhakim Ouaalam, Onur Afacan, Simon K. Warfield, Caitlin K. Rollins, Ali Gholipour

arXiv preprint · Aug 20, 2025
Accurate characterization of in-utero brain development is essential for understanding typical and atypical neurodevelopment. Building upon previous efforts to construct spatiotemporal fetal brain MRI atlases, we present the CRL-2025 fetal brain atlas, a spatiotemporal (4D) atlas of the developing fetal brain between 21 and 37 gestational weeks. The atlas is constructed from carefully processed MRI scans of 160 fetuses with typically developing brains, using a diffeomorphic deformable registration framework integrated with kernel regression on gestational age. CRL-2025 uniquely includes detailed tissue segmentations, transient white matter compartments, and a parcellation into 126 anatomical regions. It offers significantly enhanced anatomical detail over the CRL-2017 atlas and is released together with the CRL diffusion MRI atlas, its newly created tissue segmentations and labels, and deep learning-based multiclass segmentation models for fine-grained fetal brain MRI segmentation. The CRL-2025 atlas and its associated tools provide a robust and scalable platform for fetal brain MRI segmentation, groupwise analysis, and early neurodevelopmental research, and all materials are publicly released to support the broader research community.
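
Kernel regression on age, as used here, amounts to building each atlas time point as a Gaussian-weighted average of subject volumes, with weights falling off as gestational age moves away from the target week. A minimal sketch under that assumption follows; the bandwidth is illustrative, and the diffeomorphic registration (which the real pipeline interleaves with averaging) is assumed already done.

```python
# Sketch of age-conditioned kernel regression for spatiotemporal atlas
# construction: a weighted average of registered volumes with Gaussian
# weights on gestational age.
import numpy as np

def atlas_at_age(images: np.ndarray, ages: np.ndarray,
                 target_age: float, bandwidth: float = 1.0) -> np.ndarray:
    """images: (N, ...) volumes in template space; ages: (N,) gestational weeks."""
    w = np.exp(-0.5 * ((ages - target_age) / bandwidth) ** 2)
    w /= w.sum()
    # Weighted average across subjects (axis 0).
    return np.tensordot(w, images, axes=(0, 0))

imgs = np.random.rand(160, 64, 64, 64)          # 160 registered fetal scans
weeks = np.random.uniform(21, 37, 160)
template_28w = atlas_at_age(imgs, weeks, target_age=28.0)
```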

Multicenter Validation of Automated Segmentation and Composition Analysis of Lumbar Paraspinal Muscles Using Multisequence MRI.

Zhang Z, Hides JA, De Martino E, Millner J, Tuxworth G

PubMed · Aug 20, 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Chronic low back pain is a global health issue with considerable socioeconomic burdens and is associated with changes in lumbar paraspinal muscles (LPM). In this retrospective study, a deep learning method was trained and externally validated for automated LPM segmentation, muscle volume quantification, and fatty infiltration assessment across multisequence MRIs. A total of 1,302 MRIs from 641 participants across five centers were included. Data from two centers were used for model training and tuning, while data from the remaining three centers were used for external testing. Model segmentation performance was evaluated against manual segmentation using the Dice similarity coefficient (DSC), and measurement accuracy was assessed using two one-sided tests and Intraclass Correlation Coefficients (ICCs). The model achieved global DSC values of 0.98 on the internal test set and 0.93 to 0.97 on external test sets. Statistical equivalence between automated and manual measurements of muscle volume and fat ratio was confirmed in most regions (<i>P</i> < .05). Agreement between automated and manual measurements was high (ICCs > 0.92). In conclusion, the proposed automated method accurately segmented LPM and demonstrated statistical equivalence to manual measurements of muscle volume and fatty infiltration ratio across multisequence, multicenter MRIs. ©RSNA, 2025.

ScarNet: A Novel Foundation Model for Automated Myocardial Scar Quantification from Late Gadolinium-Enhancement Images.

Tavakoli N, Rahsepar AA, Benefield BC, Shen D, López-Tapia S, Schiffers F, Goldberger JJ, Albert CM, Wu E, Katsaggelos AK, Lee DC, Kim D

PubMed · Aug 20, 2025
Late gadolinium enhancement (LGE) imaging remains the gold standard for assessing myocardial fibrosis and scarring, with left ventricular (LV) LGE presence and extent serving as predictors of major adverse cardiac events (MACE). Despite its clinical significance, LGE-based LV scar quantification is not used routinely, owing to labor-intensive manual segmentation and substantial inter-observer variability. We developed ScarNet, which combines a transformer-based encoder from the Medical Segment Anything Model (MedSAM), fine-tuned on our dataset, with a convolution-based U-Net decoder with tailored attention blocks, to automatically segment myocardial scar boundaries while maintaining anatomical context. The network was trained and fine-tuned on an existing database of 401 ischemic cardiomyopathy patients (4,137 2D LGE images) with expert segmentation of myocardial and scar boundaries, validated on 100 patients (1,034 2D LGE images) during training, and tested on an unseen set of 184 patients (1,895 2D LGE images). Ablation studies were conducted to validate the contribution of each architectural component. In the 184 independent testing patients, ScarNet achieved accurate scar boundary segmentation (median Dice = 0.912 [interquartile range (IQR): 0.863-0.944], concordance correlation coefficient [CCC] = 0.963), significantly outperforming both MedSAM (median Dice = 0.046 [IQR: 0.043-0.047], CCC = 0.018) and nnU-Net (median Dice = 0.638 [IQR: 0.604-0.661], CCC = 0.734). For scar volume quantification, ScarNet demonstrated excellent agreement with manual analysis (CCC = 0.995, percent bias = -0.63%, CoV = 4.3%) compared with MedSAM (CCC = 0.002, percent bias = -13.31%, CoV = 130.3%) and nnU-Net (CCC = 0.910, percent bias = -2.46%, CoV = 20.3%). Similar trends were observed in Monte Carlo simulations with noise perturbations: overall accuracy was highest for ScarNet (sensitivity = 95.3%; specificity = 92.3%), followed by nnU-Net (sensitivity = 74.9%; specificity = 69.2%) and MedSAM (sensitivity = 15.2%; specificity = 92.3%). ScarNet outperformed MedSAM and nnU-Net in predicting myocardial and scar boundaries in LGE images of patients with ischemic cardiomyopathy, and the Monte Carlo simulations demonstrated that it is less sensitive to noise perturbations than the other tested networks.
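
Lin's concordance correlation coefficient (CCC), the agreement metric quoted throughout this abstract, penalizes both poor correlation and systematic bias between two measurement series. A minimal NumPy implementation on synthetic auto-versus-manual scar volumes is sketched below; it is our illustration, not the authors' code.

```python
# Minimal implementation of Lin's concordance correlation coefficient (CCC).
import numpy as np

def concordance_ccc(x: np.ndarray, y: np.ndarray) -> float:
    """Lin's CCC between two series (e.g. automated vs manual scar volume)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                # population variances
    cov = ((x - mx) * (y - my)).mean()       # population covariance
    return 2 * cov / (vx + vy + (mx - my) ** 2)

rng = np.random.default_rng(2)
manual = rng.normal(20.0, 8.0, 184)          # scar volume per patient (synthetic)
auto = manual + rng.normal(0.0, 1.0, 184)    # good agreement -> CCC near 1
print(round(concordance_ccc(auto, manual), 3))
```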

Sarcopenia Assessment Using Fully Automated Deep Learning Predicts Cardiac Allograft Survival in Heart Transplant Recipients.

Lang FM, Liu J, Clerkin KJ, Driggin EA, Einstein AJ, Sayer GT, Takeda K, Uriel N, Summers RM, Topkara VK

PubMed · Aug 20, 2025
Sarcopenia is associated with adverse outcomes in patients with end-stage heart failure. Muscle mass can be quantified via manual segmentation of computed tomography images, but this approach is time-consuming and subject to interobserver variability. We sought to determine whether fully automated assessment of radiographic sarcopenia by deep learning would predict heart transplantation outcomes. This retrospective study included 164 adult patients who underwent heart transplantation between January 2013 and December 2022. A deep learning-based tool was used to automatically calculate cross-sectional skeletal muscle area at the T11, T12, and L1 levels on chest computed tomography. Radiographic sarcopenia was defined as a skeletal muscle index (skeletal muscle area divided by height squared) in the lowest sex-specific quartile. The study population had a mean age of 53±14 years and was predominantly male (75%) with a nonischemic cause (73%). Mean skeletal muscle index was 28.3±7.6 cm²/m² for females versus 33.1±8.1 cm²/m² for males (P<0.001). Cardiac allograft survival was significantly lower in heart transplant recipients with versus without radiographic sarcopenia at T11 (90% versus 98% at 1 year, 83% versus 97% at 3 years, log-rank P=0.02). After multivariable adjustment, radiographic sarcopenia at T11 was associated with an increased risk of cardiac allograft loss or death (hazard ratio, 3.86 [95% CI, 1.35-11.0]; P=0.01). Patients with radiographic sarcopenia also had a significantly longer hospital length of stay (28 [interquartile range, 19-33] versus 20 [interquartile range, 16-31] days; P=0.046). Fully automated quantification of radiographic sarcopenia using pretransplant chest computed tomography successfully predicts cardiac allograft survival. By avoiding interobserver variability and accelerating computation, this approach has the potential to improve candidate selection and outcomes in heart transplantation.
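
The sarcopenia definition above is straightforward to operationalize: skeletal muscle index (SMI) = muscle area / height², flagged when below the sex-specific 25th percentile. A sketch with pandas on synthetic data follows; the column names are ours.

```python
# Sketch of the radiographic sarcopenia definition: SMI in the lowest
# sex-specific quartile. Data are synthetic and illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 164
df = pd.DataFrame({
    "sex": rng.choice(["F", "M"], n, p=[0.25, 0.75]),
    "muscle_area_cm2": rng.normal(95.0, 20.0, n),   # e.g. T11 skeletal muscle area
    "height_m": rng.normal(1.72, 0.09, n),
})
df["smi"] = df["muscle_area_cm2"] / df["height_m"] ** 2   # cm^2 / m^2

# Lowest sex-specific quartile -> radiographic sarcopenia
q25 = df.groupby("sex")["smi"].transform(lambda s: s.quantile(0.25))
df["sarcopenia"] = df["smi"] < q25
print(df.groupby("sex")["smi"].mean().round(1))
```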

Profiling disease experience in patients living with brain aneurysms by analyzing multimodal clinical data and quality of life measures.

Reder SR, Hardt J, Brockmann MA, Brockmann C, Kim S, Kawulycz M, Schulz M, Kantelhardt SR, Petrowski K, Fischbeck S

PubMed · Aug 20, 2025
To explore mental and physical health (MH, PH) in individuals living with brain aneurysms and to profile differences in their disease experience. In N = 111 patients, the Short Form 36 Health Survey (SF-36) was administered via an online survey; supplementary data included angiography and magnetic resonance imaging (MRI) findings, including AI-based brain lesion volume (LV) analyses in ml. Correlation and regression analyses were conducted (including biological sex, age, overall brain LV, PH, MH). Disease profiles were determined using principal component analysis. Compared with the German normative cohort, patients exhibited lower SF-36 scores overall. In regression analyses, DW was predicted by PH (β = 0.345) and MH (β = -0.646; R = 0.557; p < 0.001). Vasospasm severity correlated significantly with LV (r = 0.242, p = 0.043), MH (r = -0.321, p = 0.043), and PH (r = -0.372, p = 0.028). Higher LV was associated with poorer PH (r = -0.502, p = 0.001), but not with MH (p > 0.05). Four main disease profiles were identified: (1) individuals with increased LV post-rupture (high DW); (2) older individuals with stable aneurysms (low DW); (3) a profile revealing a sex disparity in QoL despite similar vasospasm severity; and (4) a profile centred on chronic pain and its impact on daily tasks. Two sub-profiles highlighted trauma-induced impairments, functional disabilities from LV, and persistent anxiety. Reduced thalamic and pallidal volumes were linked to low QoL following subarachnoid hemorrhage. MH has a greater impact on quality of life than physical disabilities, leading to prolonged DW, and an isolated physical impairment was atypical for a perceived worse outcome. Patient profiles revealed that clinical history, sex, psychological stress, and pain each contribute uniquely to QoL and work capacity. Prioritizing MH when assessing workability and rehabilitation is crucial for survivors' long-term outcomes.
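
Patient profiles of this kind are typically derived by running principal component analysis on standardized features and reading the loadings of each component. A hedged sketch with scikit-learn follows; the feature set and data are illustrative, not the study's actual variables.

```python
# Sketch of deriving patient "disease profiles" via PCA on standardized
# clinical and QoL features, in the spirit of the analysis above.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
# Rows: patients; columns might be lesion volume, vasospasm grade,
# SF-36 PH, SF-36 MH, pain score (all synthetic here).
X = rng.normal(size=(111, 5))
Z = StandardScaler().fit_transform(X)

pca = PCA(n_components=4)
scores = pca.fit_transform(Z)             # per-patient profile coordinates
print(pca.explained_variance_ratio_.round(2))
print(pca.components_.round(2))           # loadings characterize each profile
```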

S³TU-Net: Structured convolution and superpixel transformer for lung nodule segmentation.

Wu Y, Liu X, Shi Y, Chen X, Wang Z, Xu Y, Wang S

PubMed · Aug 20, 2025
Accurate segmentation of lung adenocarcinoma nodules in computed tomography (CT) images is critical for clinical staging and diagnosis. However, irregular nodule shapes and ambiguous boundaries pose significant challenges for existing methods. This study introduces S³TU-Net, a hybrid CNN-Transformer architecture designed to enhance feature extraction, fusion, and global context modeling. The model integrates three key innovations: (1) structured convolution blocks (DWF-Conv/D²BR-Conv) for multi-scale feature extraction and overfitting mitigation; (2) S²-MLP Link, a spatial-shift-enhanced skip-connection module to improve multi-level feature fusion; and (3) a residual-based superpixel vision transformer (RM-SViT) to capture long-range dependencies efficiently. Evaluated on the LIDC-IDRI dataset, S³TU-Net achieves a Dice score of 89.04%, precision of 90.73%, and IoU of 90.70%, outperforming recent methods by 4.52% in Dice. Validation on the EPDB dataset further confirms its generalizability (Dice: 86.40%). By integrating structured convolutions and superpixel-based transformers, this work bridges the gap between local feature sensitivity and global context awareness, offering a robust tool for clinical decision support.
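
The spatial-shift idea behind S²-MLP-style blocks moves channel groups one pixel in four directions so that a following per-pixel MLP mixes information across neighbouring positions. Below is a PyTorch sketch of that operation as we read it; it is our illustration of the mechanism, not the S³TU-Net source.

```python
# Sketch of the spatial-shift operation used in S^2-MLP-style modules:
# four channel groups are shifted one pixel right/left/down/up.
import torch

def spatial_shift(x: torch.Tensor) -> torch.Tensor:
    """x: (B, C, H, W); returns a tensor with shifted channel groups."""
    b, c, h, w = x.shape
    g = c // 4
    out = x.clone()
    out[:, 0*g:1*g, :, 1:] = x[:, 0*g:1*g, :, :-1]   # shift right
    out[:, 1*g:2*g, :, :-1] = x[:, 1*g:2*g, :, 1:]   # shift left
    out[:, 2*g:3*g, 1:, :] = x[:, 2*g:3*g, :-1, :]   # shift down
    out[:, 3*g:4*g, :-1, :] = x[:, 3*g:4*g, 1:, :]   # shift up
    return out

x = torch.randn(1, 64, 32, 32)
print(spatial_shift(x).shape)    # torch.Size([1, 64, 32, 32])
```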