
Ascending Aortic Dimensions and Body Size: Allometric Scaling, Normative Values, and Prognostic Performance.

Tavolinejad H, Beeche C, Dib MJ, Pourmussa B, Damrauer SM, DePaolo J, Azzo JD, Salman O, Duda J, Gee J, Kun S, Witschey WR, Chirinos JA

pubmed · papers · Aug 21 2025
Ascending aortic (AscAo) dimensions partially depend on body size. Ratiometric (linear) indexing of AscAo dimensions to height and body surface area (BSA) is currently recommended, but it is unclear whether these allometric relationships are indeed linear. This study aimed to evaluate the allometric relations, normative values, and prognostic performance of AscAo dimension indices. We studied UK Biobank (UKB) (n = 49,271) and Penn Medicine BioBank (PMBB) (n = 8,426) participants. A convolutional neural network was used to segment the thoracic aorta from available magnetic resonance and computed tomography thoracic images. Normal allometric exponents of AscAo dimensions were derived from log-log models among healthy reference subgroups. Prognostic associations of AscAo dimensions were assessed with the use of Cox models. Among reference subgroups of both the UKB (n = 11,310; age 52 ± 8 years; 37% male) and the PMBB (n = 799; age 50 ± 16 years; 41% male), the diameter/height, diameter/BSA, and area/BSA indices exhibited highly nonlinear relationships with body size. In contrast, the allometric exponent of the area/height index was close to unity (UKB: 1.04; PMBB: 1.13). Accordingly, the linear area/height ratio did not exhibit residual associations with height (UKB: R² = 0.04 [P = 0.411]; PMBB: R² = 0.08 [P = 0.759]). Across quintiles of height and BSA, area/height was the only ratiometric index that consistently classified aortic dilation, whereas all other indices systematically underestimated or overestimated AscAo dilation at the extremes of body size. Area/height was robustly associated with thoracic aorta events in the UKB (HR: 3.73; P < 0.001) and the PMBB (HR: 1.83; P < 0.001). Among AscAo indices, area/height was allometrically correct, did not exhibit residual associations with body size, and was consistently associated with adverse events.
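The core of the allometric analysis, fitting log(dimension) against log(body size) and checking whether the exponent is near 1, can be sketched in a few lines. A minimal illustration with synthetic data (the variable names and distributions are assumptions, not the study's biobank measurements):

```python
# Minimal sketch of the log-log allometric fit described above (synthetic data;
# the real analysis used UK Biobank / Penn Medicine BioBank measurements).
import numpy as np

rng = np.random.default_rng(0)
height_m = rng.normal(1.70, 0.10, 5000)             # body height in meters
# Simulate an aortic area that scales roughly linearly with height:
area_cm2 = 4.0 * height_m * rng.lognormal(0.0, 0.10, 5000)

# Allometric model: area = k * height^b  =>  log(area) = log(k) + b*log(height)
b, log_k = np.polyfit(np.log(height_m), np.log(area_cm2), deg=1)
print(f"allometric exponent b = {b:.2f}")           # ~1 => area/height is valid

# If b ~= 1, the simple ratiometric index area/height carries no residual
# height dependence, which is what the study reports (UKB: 1.04, PMBB: 1.13).
index = area_cm2 / height_m
r = np.corrcoef(index, height_m)[0, 1]
print(f"residual association of area/height with height: R^2 = {r**2:.3f}")
```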

Deep Learning-Assisted Skeletal Muscle Radiation Attenuation at C3 Predicts Survival in Head and Neck Cancer

Barajas Ordonez, F., Xie, K., Ferreira, A., Siepmann, R., Chargi, N., Nebelung, S., Truhn, D., Berge, S., Bruners, P., Egger, J., Hölzle, F., Wirth, M., Kuhl, C., Puladi, B.

medrxiv · preprint · Aug 21 2025
Background: Head and neck cancer (HNC) patients face an increased risk of malnutrition due to lifestyle, tumor localization, and treatment effects. While skeletal muscle area (SMA) and radiation attenuation (SM-RA) at the third lumbar vertebra (L3) are established prognostic markers, L3 is not routinely available in head and neck imaging, and the prognostic value of SM-RA at the third cervical vertebra (C3) remains unclear. This study assesses whether SMA and SM-RA at C3 predict locoregional control (LRC) and overall survival (OS) in HNC. Methods: We analyzed 904 HNC cases with head and neck CT scans. A deep learning pipeline identified C3, and SMA/SM-RA were quantified via automated segmentation with manual verification. Cox proportional hazards models assessed associations with LRC and OS, adjusting for clinical factors. Results: Median SMA and SM-RA were 36.64 cm² (IQR: 30.12-42.44) and 50.77 HU (IQR: 43.04-57.39). In multivariate analysis, lower SMA (HR 1.62, 95% CI: 1.02-2.58, p = 0.04), lower SM-RA (HR 1.89, 95% CI: 1.30-2.79, p < 0.001), and advanced T stage (HR 1.50, 95% CI: 1.06-2.12, p = 0.02) were prognostic for LRC. OS predictors included advanced T stage (HR 2.17, 95% CI: 1.64-2.87, p < 0.001), age ≥70 years (HR 1.40, 95% CI: 1.00-1.96, p = 0.05), male sex (HR 1.64, 95% CI: 1.02-2.63, p = 0.04), and lower SM-RA (HR 2.15, 95% CI: 1.56-2.96, p < 0.001). Conclusion: Deep learning-assisted SM-RA assessment at C3 outperforms SMA for predicting LRC and OS in HNC, supporting its use as a routine biomarker and an alternative to L3.
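For readers unfamiliar with the two muscle metrics, here is a hedged sketch of how SMA and SM-RA are conventionally derived from one axial CT slice and a muscle mask. The array names and pixel spacing are hypothetical; the paper's pipeline locates C3 and segments the muscle automatically with deep learning:

```python
# Sketch of skeletal muscle area (SMA) and radiation attenuation (SM-RA)
# computed from a single axial CT slice and a binary muscle mask.
import numpy as np

def muscle_metrics(ct_slice_hu: np.ndarray, muscle_mask: np.ndarray,
                   pixel_spacing_mm=(0.7, 0.7)):
    """Return (SMA in cm^2, SM-RA in HU) for one axial slice."""
    pixel_area_cm2 = (pixel_spacing_mm[0] * pixel_spacing_mm[1]) / 100.0
    sma = muscle_mask.sum() * pixel_area_cm2              # area = n_pixels * pixel area
    sm_ra = ct_slice_hu[muscle_mask.astype(bool)].mean()  # mean attenuation in HU
    return sma, sm_ra

# Toy example: a 512x512 slice with a synthetic muscle region around 50 HU.
ct = np.full((512, 512), -1000.0)                         # air background
mask = np.zeros((512, 512), dtype=np.uint8)
mask[200:300, 150:350] = 1
ct[mask == 1] = 50.0
print(muscle_metrics(ct, mask))                           # ~(98 cm^2, 50.0 HU)
```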

Hessian-Based Lightweight Neural Network HessNet for State-of-the-Art Brain Vessel Segmentation on a Minimal Training Dataset

Alexandra Bernadotte, Nikita Elfimov, Mikhail Shutov, Ivan Menshikov

arxiv · preprint · Aug 21 2025
Accurate segmentation of blood vessels in brain magnetic resonance angiography (MRA) is essential for successful surgical procedures, such as aneurysm repair or bypass surgery. Currently, annotation is primarily performed through manual segmentation or classical methods, such as the Frangi filter, which often lack sufficient accuracy. Neural networks have emerged as powerful tools for medical image segmentation, but their development depends on well-annotated training datasets. However, there is a notable lack of publicly available MRA datasets with detailed brain vessel annotations. To address this gap, we propose HessNet, a lightweight semi-supervised neural network that incorporates Hessian matrices for 3D segmentation of tubular structures such as vessels. HessNet has only 6,000 parameters, can run on a CPU, and significantly reduces the resources required to train neural networks. Its vessel segmentation accuracy on a minimal training dataset reaches state-of-the-art results. Using HessNet, we created a large, semi-manually annotated brain vessel dataset of 200 brain MRA images based on the IXI dataset. Annotation was performed by three experts under the supervision of three neurovascular surgeons after applying HessNet, which provides high vessel segmentation accuracy and allows the experts to focus only on the most complex and important cases. The dataset is available at https://git.scinalytics.com/terilat/VesselDatasetPartly.
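The Hessian features that HessNet builds on can be illustrated with a standard scale-space recipe: eigenvalues of the Gaussian-smoothed Hessian highlight tubular structures, as in the Frangi filter mentioned above. This is an illustrative sketch, not the authors' implementation:

```python
# Per-voxel Hessian eigenvalues as vesselness features for a 3D volume.
import numpy as np
from scipy import ndimage

def hessian_eigenvalues_3d(volume: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Return per-voxel Hessian eigenvalues (ascending), shape (*volume.shape, 3)."""
    H = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            order = [0, 0, 0]
            order[i] += 1
            order[j] += 1
            # Gaussian derivative of order (i, j) approximates d^2 / dx_i dx_j
            H[..., i, j] = ndimage.gaussian_filter(volume, sigma, order=order)
    return np.linalg.eigvalsh(H)

# Toy volume with a bright tube along axis 0:
vol = np.zeros((32, 32, 32))
vol[:, 16, 16] = 1.0
eig = hessian_eigenvalues_3d(ndimage.gaussian_filter(vol, 1.0))
# For a bright tube, two eigenvalues are strongly negative and one is ~0.
print(eig[16, 16, 16])
```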

Automated Deep Learning Pipeline for Callosal Angle Quantification

Shirzadeh Barough, S., Bilgel, M., Ventura, C., Moghekar, A., Albert, M., Miller, M. I.

medrxiv · preprint · Aug 21 2025
BACKGROUND AND PURPOSE: Normal pressure hydrocephalus (NPH) is a potentially treatable neurodegenerative disorder that remains underdiagnosed due to its clinical overlap with other conditions and the labor-intensive nature of manual imaging analyses. Imaging biomarkers, such as the callosal angle (CA), Evans Index (EI), and Disproportionately Enlarged Subarachnoid Space Hydrocephalus (DESH), play a crucial role in NPH diagnosis but are often limited by subjective interpretation. To address these challenges, we developed a fully automated and robust deep learning framework for measuring the CA directly from raw T1 MPRAGE and non-MPRAGE MRI scans. MATERIALS AND METHODS: Our method integrates two complementary modules. First, a BrainSignsNET model is employed to accurately detect key anatomical landmarks, notably the anterior commissure (AC) and posterior commissure (PC). Preprocessed 3D MRI scans, reoriented to the Right Anterior Superior (RAS) system and resized to standardized cubes while preserving aspect ratios, serve as input for landmark localization. After these landmarks are detected, a coronal slice perpendicular to the AC-PC line at the level of the PC is extracted for subsequent analysis. Second, a UNet-based segmentation network, featuring a pretrained EfficientNetB0 encoder, generates multiclass masks of the lateral ventricles from the coronal slices, which are then used to calculate the callosal angle. RESULTS: Training and internal validation were performed using datasets from the Baltimore Longitudinal Study of Aging (BLSA) and BIOCARD, while external validation utilized 216 clinical MRI scans from Johns Hopkins Bayview Hospital. Our framework achieved high concordance with manual measurements, demonstrating a strong correlation (r = 0.98, p < 0.001) and a mean absolute error (MAE) of 2.95 (SD 1.58) degrees. Moreover, error analysis confirmed that CA measurement performance was independent of patient age, gender, and EI, underscoring the broad applicability of this method. CONCLUSIONS: These results indicate that our fully automated CA measurement framework is a reliable and reproducible alternative to manual methods, outperforms reported interobserver variability in assessing the callosal angle, and offers significant potential to enhance early detection and diagnosis of NPH in both research and clinical settings.
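The final geometric step, measuring the angle once the ventricles are segmented on the AC-PC-perpendicular coronal slice, reduces to an angle between two rays. A minimal sketch with hypothetical points (the paper derives them from the UNet ventricle masks):

```python
# Angle between two rays from a shared apex, in degrees.
import numpy as np

def callosal_angle(apex, left_point, right_point) -> float:
    """Angle at `apex` between rays toward the two lateral-ventricle walls."""
    v1 = np.asarray(left_point, float) - np.asarray(apex, float)
    v2 = np.asarray(right_point, float) - np.asarray(apex, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# NPH typically shows a steep (small) callosal angle; ~120 degrees is normal.
print(callosal_angle(apex=(0, 0), left_point=(-40, -60), right_point=(40, -60)))
```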

Clinically-Informed Preprocessing Improves Stroke Segmentation in Low-Resource Settings

Juampablo E. Heras Rivera, Hitender Oswal, Tianyi Ren, Yutong Pan, William Henry, Caitlin M. Neher, Mehmet Kurt

arxiv · preprint · Aug 21 2025
Stroke is among the top three causes of death worldwide, and accurate identification of ischemic stroke lesion boundaries from imaging is critical for diagnosis and treatment. The main imaging modalities used include magnetic resonance imaging (MRI), particularly diffusion weighted imaging (DWI), and computed tomography (CT)-based techniques such as non-contrast CT (NCCT), contrast-enhanced CT angiography (CTA), and CT perfusion (CTP). DWI is the gold standard for the identification of lesions but has limited applicability in low-resource settings due to prohibitive costs. CT-based imaging is currently the most practical imaging method in low-resource settings due to low costs and simplified logistics, but it lacks the high specificity of MRI-based methods in monitoring ischemic insults. Supervised deep learning methods are the leading solution for automated ischemic stroke lesion segmentation and provide an opportunity to improve diagnostic quality in low-resource settings by incorporating insights from DWI when segmenting from CT. Here, we develop a series of models that use CT images taken upon arrival as inputs to predict follow-up lesion volumes annotated from DWI taken 2-9 days later. Furthermore, we implement clinically motivated preprocessing steps and show that the proposed pipeline yields a 38% improvement in Dice score over 10 folds compared with an nnU-Net model trained with the baseline preprocessing. Finally, we demonstrate that additional preprocessing of CTA maps to extract vessel segmentations further improves our best model by 21% over 5 folds.
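The 38% and 21% figures refer to the Dice overlap metric; for reference, a minimal implementation:

```python
# Dice coefficient: 2|A ∩ B| / (|A| + |B|) for two binary masks.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((64, 64), dtype=np.uint8); a[10:40, 10:40] = 1
b = np.zeros((64, 64), dtype=np.uint8); b[20:50, 20:50] = 1
print(f"Dice = {dice(a, b):.3f}")   # 0.444 for these partially overlapping squares
```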

LGMSNet: Thinning a medical image segmentation model via dual-level multiscale fusion

Chengqi Dong, Fenghe Tang, Rongge Mao, Xinpei Gao, S. Kevin Zhou

arxiv · preprint · Aug 21 2025
Medical image segmentation plays a pivotal role in disease diagnosis and treatment planning, particularly in resource-constrained clinical settings where lightweight and generalizable models are urgently needed. However, existing lightweight models often trade performance for efficiency and rarely adopt computationally expensive attention mechanisms, severely restricting their global contextual perception capabilities. Additionally, current architectures neglect the channel redundancy that arises under identical convolutional kernels in medical imaging, which hinders effective feature extraction. To address these challenges, we propose LGMSNet, a novel lightweight framework built on dual-level local and global multiscale fusion that achieves state-of-the-art performance with minimal computational overhead. LGMSNet employs heterogeneous intra-layer kernels to extract local high-frequency information while mitigating channel redundancy. In addition, the model integrates sparse transformer-convolutional hybrid branches to capture low-frequency global information. Extensive experiments across six public datasets demonstrate LGMSNet's superiority over existing state-of-the-art methods. In particular, LGMSNet maintains exceptional performance in zero-shot generalization tests on four unseen datasets, underscoring its potential for real-world deployment in resource-limited medical scenarios. The full project code is available at https://github.com/cq-dong/LGMSNet.
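One way to read "heterogeneous intra-layer kernels" is a layer that splits channels into groups and convolves each group with a different kernel size, so a single layer sees several local scales without redundant filters. A PyTorch sketch of that idea (group sizes and kernel set are assumptions, not LGMSNet's exact configuration):

```python
# Mixed kernel sizes within one layer via channel-group branches.
import torch
import torch.nn as nn

class HeteroKernelConv(nn.Module):
    def __init__(self, channels: int, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0
        group = channels // len(kernel_sizes)
        self.branches = nn.ModuleList(
            nn.Conv2d(group, group, k, padding=k // 2) for k in kernel_sizes
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        chunks = torch.chunk(x, len(self.branches), dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.branches, chunks)], dim=1)

x = torch.randn(1, 64, 32, 32)
print(HeteroKernelConv(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```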

DCE-UNet: A Transformer-Based Fully Automated Segmentation Network for Multiple Adolescent Spinal Disorders in X-ray Images.

Xue Z, Deng S, Yue Y, Chen C, Li Z, Yang Y, Sun S, Liu Y

pubmed · papers · Aug 21 2025
In recent years, spinal X-ray image segmentation has played a vital role in the computer-aided diagnosis of various adolescent spinal disorders. However, due to the complex morphology of lesions and the fact that most existing methods are tailored to single-disease scenarios, current segmentation networks struggle to balance local detail preservation and global structural understanding across different disease types. As a result, they often suffer from limited accuracy, insufficient robustness, and poor adaptability. To address these challenges, we propose a novel fully automated spinal segmentation network, DCE-UNet, which integrates the local modeling strength of convolutional neural networks (CNNs) with the global contextual awareness of Transformers. The network introduces several architectural and feature fusion innovations. Specifically, a lightweight Transformer module is incorporated in the encoder to model high-level semantic features and enhance global contextual understanding. In the decoder, a Rec-Block module combining residual convolution and channel attention is designed to improve feature reconstruction and multi-scale fusion during the upsampling process. Additionally, the downsampling feature extraction path integrates a novel DC-Block that fuses channel and spatial attention mechanisms, enhancing the network's ability to represent complex lesion structures. Experiments conducted on a self-constructed large-scale multi-disease adolescent spinal X-ray dataset demonstrate that DCE-UNet achieves a Dice score of 91.3%, a mean Intersection over Union (mIoU) of 84.1, and a Hausdorff Distance (HD) of 4.007, outperforming several state-of-the-art comparison networks. Validation on real segmentation tasks further confirms that DCE-UNet delivers consistently superior performance across various lesion regions, highlighting its strong adaptability to multiple pathologies and promising potential for clinical application.
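A plausible rendering of the Rec-Block as described (residual convolution plus channel attention), sketched as a squeeze-and-excitation-style PyTorch module; the exact block design belongs to the DCE-UNet authors, and this is only an illustration:

```python
# Residual convolution with a channel-attention (squeeze/excite) gate.
import torch
import torch.nn as nn

class RecBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        self.attn = nn.Sequential(          # per-channel attention weights in (0, 1)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)
        return x + y * self.attn(y)         # residual + attention-weighted path

print(RecBlock(32)(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```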

Development and verification of a convolutional neural network-based model for automatic mandibular canal localization on multicenter CBCT images.

Pan X, Wang C, Luo X, Dong Q, Sun H, Zhang W, Qu H, Deng R, Lin Z

pubmed · papers · Aug 21 2025
Development and verification of a convolutional neural network (CNN)-based deep learning (DL) model for mandibular canal (MC) localization on multicenter cone beam computed tomography (CBCT) images. In this study, a total of 1,056 CBCT scans from multiple centers were collected. Of these, 836 CBCT scans from one manufacturer were used to develop the CNN model (training set : validation set : internal testing set = 640:360:36), and an external testing dataset of 220 CBCT scans from four other manufacturers was used for testing. The convolution module was built as a stack of Conv + InstanceNorm + LeakyReLU. Average symmetric surface distance (ASSD) and symmetric mean curve distance (SMCD) were used for quantitative evaluation of the model on both the internal testing data and part of the external testing data. Visual scoring (1-5 points) was performed to evaluate the accuracy and generalizability of MC localization for all external testing data, and differences in ASSD, SMCD, and visual scores among the four manufacturers were compared. The times for manual and automatic MC localization were recorded. For the internal testing dataset, the average ASSD and SMCD were 0.486 mm and 0.298 mm, respectively. For the external testing dataset, 86.8% of CBCT scans had visual scores ≥ 4 points; the average ASSD and SMCD of 40 CBCT scans with visual scores ≥ 4 points were 0.438 mm and 0.185 mm, respectively, and there were significant differences among the four manufacturers in ASSD, SMCD, and visual scores (p < 0.05). The time for bilateral automatic MC localization was 8.52 ± 0.97 s. In this study, a CNN model was developed for automatic MC localization, and large-sample external testing on multicenter CBCT images showed its excellent potential for clinical application.
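The stated convolution module, a stack of Conv + InstanceNorm + LeakyReLU, translates directly to PyTorch; the depth and channel counts below are assumptions:

```python
# A 3D Conv + InstanceNorm + LeakyReLU stack, as named in the abstract.
import torch
import torch.nn as nn

def conv_in_lrelu(in_ch: int, out_ch: int) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.InstanceNorm3d(out_ch),
        nn.LeakyReLU(0.01, inplace=True),
    )

block = nn.Sequential(conv_in_lrelu(1, 16), conv_in_lrelu(16, 32))
print(block(torch.randn(1, 1, 32, 64, 64)).shape)  # torch.Size([1, 32, 32, 64, 64])
```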

Label Uncertainty for Ultrasound Segmentation

Malini Shivaram, Gautam Rajendrakumar Gare, Laura Hutchins, Jacob Duplantis, Thomas Deiss, Thales Nogueira Gomes, Thong Tran, Keyur H. Patel, Thomas H Fox, Amita Krishnan, Deva Ramanan, Bennett DeBoisblanc, Ricardo Rodriguez, John Galeotti

arxiv · preprint · Aug 21 2025
In medical imaging, inter-observer variability among radiologists often introduces label uncertainty, particularly in modalities where visual interpretation is subjective. Lung ultrasound (LUS) is a prime example: it frequently presents a mixture of highly ambiguous regions and clearly discernible structures, making consistent annotation challenging even for experienced clinicians. In this work, we introduce a novel approach to both labeling and training AI models using expert-supplied, per-pixel confidence values. Rather than treating annotations as absolute ground truth, we design a data annotation protocol that captures the confidence that radiologists have in each labeled region, modeling the inherent aleatoric uncertainty present in real-world clinical data. We demonstrate that incorporating these confidence values during training leads to improved segmentation performance. More importantly, we show that this enhanced segmentation quality translates into better performance on downstream clinically critical tasks: estimating S/F oxygenation ratio values, classifying S/F ratio change, and predicting 30-day patient readmission. While we empirically evaluate many methods for exposing the uncertainty to the learning model, we find that a simple approach that trains a model on labels binarized at a 60% confidence threshold works well. Importantly, high thresholds work far better than a naive 50% threshold, indicating that training on very confident pixels is far more effective. Our study systematically investigates the impact of training with varying confidence thresholds, comparing not only segmentation metrics but also downstream clinical outcomes. These results suggest that label confidence is a valuable signal that, when properly leveraged, can significantly enhance the reliability and clinical utility of AI in medical imaging.
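The winning strategy reported above, binarizing per-pixel confidence labels at a high threshold before training, is simple to express. A sketch with synthetic confidence maps standing in for the radiologists' annotations:

```python
# Turn per-pixel confidence labels into binary training targets.
import numpy as np

def binarize_confidence(conf_map: np.ndarray, threshold: float = 0.60) -> np.ndarray:
    """Keep only pixels labeled with confidence >= threshold as positives."""
    return (conf_map >= threshold).astype(np.uint8)

conf = np.random.default_rng(1).uniform(0.0, 1.0, size=(4, 4))
print(binarize_confidence(conf, 0.60))   # mask used as the training target
print(binarize_confidence(conf, 0.50))   # naive 50% threshold keeps noisier pixels
```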

Sarcopenia Assessment Using Fully Automated Deep Learning Predicts Cardiac Allograft Survival in Heart Transplant Recipients.

Lang FM, Liu J, Clerkin KJ, Driggin EA, Einstein AJ, Sayer GT, Takeda K, Uriel N, Summers RM, Topkara VK

pubmed · papers · Aug 20 2025
Sarcopenia is associated with adverse outcomes in patients with end-stage heart failure. Muscle mass can be quantified via manual segmentation of computed tomography images, but this approach is time-consuming and subject to interobserver variability. We sought to determine whether fully automated assessment of radiographic sarcopenia by deep learning would predict heart transplantation outcomes. This retrospective study included 164 adult patients who underwent heart transplantation between January 2013 and December 2022. A deep learning-based tool was utilized to automatically calculate cross-sectional skeletal muscle area at the T11, T12, and L1 levels on chest computed tomography. Radiographic sarcopenia was defined as a skeletal muscle index (skeletal muscle area divided by height squared) in the lowest sex-specific quartile. The study population had a mean age of 53 ± 14 years and was predominantly male (75%) with a nonischemic cause (73%). Mean skeletal muscle index was 28.3 ± 7.6 cm²/m² for females versus 33.1 ± 8.1 cm²/m² for males (P < 0.001). Cardiac allograft survival was significantly lower in heart transplant recipients with versus without radiographic sarcopenia at T11 (90% versus 98% at 1 year, 83% versus 97% at 3 years, log-rank P = 0.02). After multivariable adjustment, radiographic sarcopenia at T11 was associated with an increased risk of cardiac allograft loss or death (hazard ratio, 3.86 [95% CI, 1.35-11.0]; P = 0.01). Patients with radiographic sarcopenia also had a significantly longer hospital length of stay (28 [interquartile range, 19-33] versus 20 [interquartile range, 16-31] days; P = 0.046). Fully automated quantification of radiographic sarcopenia using pretransplant chest computed tomography successfully predicts cardiac allograft survival. By avoiding interobserver variability and accelerating computation, this approach has the potential to improve candidate selection and outcomes in heart transplantation.
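The sarcopenia definition used here, skeletal muscle index (area divided by height squared) in the lowest sex-specific quartile, can be sketched directly; the data below are synthetic stand-ins for the study's T11/T12/L1 chest CT measurements:

```python
# Skeletal muscle index and lowest sex-specific quartile flagging.
import numpy as np

def skeletal_muscle_index(sma_cm2: np.ndarray, height_m: np.ndarray) -> np.ndarray:
    return sma_cm2 / height_m**2                  # units: cm^2 / m^2

def sarcopenia_flags(smi: np.ndarray, is_male: np.ndarray) -> np.ndarray:
    """Flag patients in the lowest sex-specific SMI quartile."""
    flags = np.zeros_like(smi, dtype=bool)
    for sex in (True, False):
        sel = is_male == sex
        flags[sel] = smi[sel] <= np.quantile(smi[sel], 0.25)
    return flags

rng = np.random.default_rng(2)
male = rng.random(200) < 0.75
smi = skeletal_muscle_index(rng.normal(95, 20, 200), rng.normal(1.72, 0.09, 200))
print(f"sarcopenic: {sarcopenia_flags(smi, male).sum()} of 200")  # ~50 patients
```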