
Deep Learning-Assisted Skeletal Muscle Radiation Attenuation at C3 Predicts Survival in Head and Neck Cancer

Barajas Ordonez, F., Xie, K., Ferreira, A., Siepmann, R., Chargi, N., Nebelung, S., Truhn, D., Berge, S., Bruners, P., Egger, J., Hölzle, F., Wirth, M., Kuhl, C., Puladi, B.

medRxiv preprint · Aug 21, 2025
Background: Head and neck cancer (HNC) patients face an increased risk of malnutrition due to lifestyle, tumor localization, and treatment effects. While skeletal muscle area (SMA) and radiation attenuation (SM-RA) at the third lumbar vertebra (L3) are established prognostic markers, L3 is not routinely available in head and neck imaging. The prognostic value of SM-RA at the third cervical vertebra (C3) remains unclear. This study assesses whether SMA and SM-RA at C3 predict locoregional control (LRC) and overall survival (OS) in HNC. Methods: We analyzed 904 HNC cases with head and neck CT scans. A deep learning pipeline identified C3, and SMA/SM-RA were quantified via automated segmentation with manual verification. Cox proportional hazards models assessed associations with LRC and OS, adjusting for clinical factors. Results: Median SMA and SM-RA were 36.64 cm² (IQR: 30.12-42.44) and 50.77 HU (IQR: 43.04-57.39). In multivariate analysis, lower SMA (HR 1.62, 95% CI: 1.02-2.58, p = 0.04), lower SM-RA (HR 1.89, 95% CI: 1.30-2.79, p < 0.001), and advanced T stage (HR 1.50, 95% CI: 1.06-2.12, p = 0.02) were prognostic for LRC. OS predictors included advanced T stage (HR 2.17, 95% CI: 1.64-2.87, p < 0.001), age ≥70 years (HR 1.40, 95% CI: 1.00-1.96, p = 0.05), male sex (HR 1.64, 95% CI: 1.02-2.63, p = 0.04), and lower SM-RA (HR 2.15, 95% CI: 1.56-2.96, p < 0.001). Conclusion: Deep learning-assisted SM-RA assessment at C3 outperforms SMA for LRC and OS in HNC, supporting its use as a routine biomarker and an alternative to L3.
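
The survival analysis above is a standard multivariable Cox regression. As a minimal, hedged sketch (not the authors' code), the snippet below fits a Cox proportional hazards model with the lifelines package on a small synthetic table; the column names (sma_low, smra_low, t_stage_advanced) and all values are illustrative assumptions.

```python
# Minimal Cox proportional hazards sketch (illustrative only; not the authors' pipeline).
# All column names and values below are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "time_months":      [12, 34, 7, 45, 23, 60, 18, 29],  # hypothetical follow-up times
    "event":            [1, 0, 1, 0, 1, 0, 1, 0],         # 1 = locoregional failure (or death for OS)
    "sma_low":          [1, 0, 1, 0, 0, 0, 1, 1],         # low skeletal muscle area at C3
    "smra_low":         [1, 0, 1, 0, 1, 0, 0, 1],         # low skeletal muscle radiation attenuation
    "t_stage_advanced": [1, 0, 1, 1, 0, 0, 1, 0],
})

cph = CoxPHFitter(penalizer=0.1)  # small penalty guards against separation in tiny samples
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()  # exp(coef) columns correspond to hazard ratios with 95% CIs
```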

CT-based machine learning model integrating intra- and peri-tumoral radiomics features for predicting occult lymph node metastasis in peripheral lung cancer.

Lu X, Liu F, E J, Cai X, Yang J, Wang X, Zhang Y, Sun B, Liu Y

PubMed · Aug 21, 2025
Accurate preoperative assessment of occult lymph node metastasis (OLNM) plays a crucial role in informing therapeutic decision-making for lung cancer patients. Computed tomography (CT) is the most widely used imaging modality for preoperative work-up. The aim of this study was to develop and validate a CT-based machine learning model integrating intra- and peri-tumoral features to predict OLNM in lung cancer patients. Eligible patients with peripheral lung cancer confirmed by radical surgical excision with systematic lymphadenectomy were retrospectively recruited from January 2019 to December 2021. A total of 1688 radiomics features were obtained from each manually segmented volume of interest (VOI), which comprised the gross tumor volume (GTV) covering the entire tumor and three peritumoral volumes (PTV3, PTV6, and PTV9) capturing the region outside the tumor. A clinical-radiomics model incorporating the radiomics signature, independent clinical factors, and CT semantic features was established via multivariable logistic regression analysis and presented as a nomogram. Model performance was evaluated by discrimination, calibration, and clinical utility. Overall, 591 patients were recruited in the training cohort and 253 in the validation cohort. The radiomics signature of PTV9 showed superior diagnostic performance compared with the PTV3 and PTV6 signatures. Integrating the GPTV radiomics signature (combining the Rad-scores of GTV and PTV9) with the clinical risk factor of serum CEA level and the CT imaging features of lobulation sign and tumor-pleura relationship demonstrated favorable accuracy in predicting OLNM in the training cohort (AUC, 0.819; 95% CI: 0.780-0.857) and validation cohort (AUC, 0.801; 95% CI: 0.741-0.860). The predictive performance of the clinical-radiomics model was statistically significantly superior to that of the clinical model in both cohorts (all p < 0.05). The clinical-radiomics model can therefore serve as a noninvasive preoperative prediction tool for personalized risk assessment of OLNM in peripheral lung cancer patients.
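
Per the abstract, the final model is a multivariable logistic regression combining a radiomics signature with clinical and CT semantic features. Below is a minimal sketch of that modeling step on synthetic data; the feature names and the label-generating process are assumptions for illustration only.

```python
# Illustrative clinical-radiomics logistic model on synthetic data (not the study's model).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 400
X = np.column_stack([
    rng.normal(size=n),          # GPTV rad-score (GTV + PTV9), hypothetical
    rng.normal(size=n),          # serum CEA level (standardized), hypothetical
    rng.integers(0, 2, size=n),  # lobulation sign (0/1)
    rng.integers(0, 2, size=n),  # tumor-pleura relationship (0/1)
])
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)  # synthetic OLNM label

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
```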

Integrating Imaging-Derived Clinical Endotypes with Plasma Proteomics and External Polygenic Risk Scores Enhances Coronary Microvascular Disease Risk Prediction

Venkatesh, R., Cherlin, T., Penn Medicine BioBank, Ritchie, M. D., Guerraty, M., Verma, S. S.

medRxiv preprint · Aug 21, 2025
Coronary microvascular disease (CMVD) is an underdiagnosed but significant contributor to the burden of ischemic heart disease, characterized by angina and myocardial infarction. The development of risk prediction models such as polygenic risk scores (PRS) for CMVD has been limited by a lack of large-scale genome-wide association studies (GWAS). However, there is significant overlap between CMVD and enrollment criteria for coronary artery disease (CAD) GWAS. In this study, we developed CMVD PRS models by selecting variants identified in a CMVD GWAS and applying weights from an external CAD GWAS, using CMVD-associated loci as proxies for the genetic risk. We integrated plasma proteomics, clinical measures from perfusion PET imaging, and PRS to evaluate their contributions to CMVD risk prediction in comprehensive machine and deep learning models. We then developed a novel unsupervised endotyping framework for CMVD from perfusion PET-derived myocardial blood flow data, revealing distinct patient subgroups beyond traditional case-control definitions. This imaging-based stratification substantially improved classification performance alongside plasma proteomics and PRS, achieving AUROCs between 0.65 and 0.73 per class, significantly outperforming binary classifiers and existing clinical models, highlighting the potential of this stratification approach to enable more precise and personalized diagnosis by capturing the underlying heterogeneity of CMVD. This work represents the first application of imaging-based endotyping and the integration of genetic and proteomic data for CMVD risk prediction, establishing a framework for multimodal modeling in complex diseases.
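
The PRS construction described above amounts to a weighted sum of effect-allele dosages, with variants selected from a CMVD GWAS and weights borrowed from an external CAD GWAS. A toy sketch of that computation is shown below; the dosage matrix and effect sizes are invented for illustration.

```python
# Toy polygenic risk score: dosage-weighted sum of external GWAS effect sizes
# (illustrative numbers; the study selects CMVD loci and applies CAD GWAS weights).
import numpy as np

dosages = np.array([   # rows = individuals, columns = variants (0/1/2 copies of the effect allele)
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 0, 2],
])
cad_betas = np.array([0.12, -0.05, 0.08, 0.03])  # hypothetical per-allele effect sizes from a CAD GWAS

prs = dosages @ cad_betas                # raw score per individual
prs_z = (prs - prs.mean()) / prs.std()   # standardize before use as a model feature
print(prs_z)
```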

Initial Recurrence Risk Stratification of Papillary Thyroid Cancer Based on Intratumoral and Peritumoral Dual Energy CT Radiomics.

Zhou Y, Xu Y, Si Y, Wu F, Xu X

PubMed · Aug 21, 2025
This study aims to evaluate the potential of dual-energy computed tomography (DECT)-based radiomics in preoperative risk stratification for the prediction of initial recurrence in papillary thyroid carcinoma (PTC). The retrospective analysis included 236 PTC cases (165 in the training cohort, 71 in the validation cohort) collected between July 2020 and June 2021. Tumor segmentation was carried out in both intratumoral and peritumoral areas (1 mm inner and outer to the tumor boundary). Three region-specific rad-scores were developed: rad-score (VOI-whole), rad-score (VOI-outer layer), and rad-score (VOI-inner layer). Three radiomics models incorporating these rad-scores and additional risk factors were compared with a clinical model alone, and the optimal radiomics model was presented as a nomogram. Rad-scores from the peritumoral regions (VOI-outer layer and VOI-inner layer) outperformed the intratumoral rad-score (VOI-whole). All radiomics models surpassed the clinical model, with the peritumoral-based models (radiomics models 2 and 3) outperforming the intratumoral-based model (radiomics model 1). The top-performing nomogram, which included tumor size, tumor site, and rad-score (VOI-inner layer), achieved an area under the curve (AUC) of 0.877 in the training cohort and 0.876 in the validation cohort. The nomogram demonstrated good calibration, clinical utility, and stability. DECT-based intratumoral and peritumoral radiomics advance PTC initial recurrence risk prediction, providing clinical radiology with precise predictive tools. Further work is needed to refine the model and enhance its clinical application. Radiomics analysis of DECT, particularly in peritumoral regions, offers valuable predictive information for assessing the risk of initial recurrence in PTC.
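
The 1 mm inner and outer peritumoral VOIs described above can be obtained from a binary tumor mask by morphological erosion and dilation. A minimal sketch (not the study's pipeline) follows; it assumes isotropic 1 mm voxels so that a single morphological step approximates a 1 mm shell.

```python
# Sketch: 1 mm inner/outer peritumoral shells from a binary tumor mask
# (assumes isotropic 1 mm voxels so one morphological step ≈ 1 mm; toy mask only).
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

tumor = np.zeros((64, 64, 64), dtype=bool)
tumor[24:40, 24:40, 24:40] = True  # toy tumor mask (VOI-whole)

outer_shell = binary_dilation(tumor, iterations=1) & ~tumor  # 1 mm ring outside the boundary
inner_shell = tumor & ~binary_erosion(tumor, iterations=1)   # 1 mm ring inside the boundary

print(tumor.sum(), outer_shell.sum(), inner_shell.sum())     # voxel counts per region
```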

Predicting Radiation Pneumonitis Integrating Clinical Information, Medical Text, and 2.5D Deep Learning Features in Lung Cancer.

Wang W, Ren M, Ren J, Dang J, Zhao X, Li C, Wang Y, Li G

PubMed · Aug 21, 2025
This study aimed to construct a prediction model for radiation pneumonitis (RP) in lung cancer patients based on clinical information, medical text, and 2.5D deep learning (DL) features. A total of 356 patients with lung cancer from the Heping Campus of the First Hospital of China Medical University were randomly divided at a 7:3 ratio into training and validation cohorts, and 238 patients from three other centers formed external testing cohorts (test1, test2, and test3) for assessing model generalizability. We used the term frequency-inverse document frequency (TF-IDF) method to generate numerical vectors from computed tomography (CT) report texts. The CT and radiation therapy dose slices showing the largest lung region of interest across the coronal and transverse planes were taken as the central slice, and the 3 slices above and below it were added to create the 2.5D data. We extracted DL features via DenseNet121, DenseNet201, and Twins-SVT and integrated them via multi-instance learning (MIL) fusion. The performances of the 2D and 3D DL models were also compared with that of the 2.5D MIL model. Finally, RP prediction models based on clinical information, medical text, and 2.5D DL features were constructed, validated, and tested. The 2.5D MIL model based on CT was significantly better than the 2D and 3D DL models in the training, validation, and test cohorts. For the radiation therapy dose data, the 2.5D MIL model performed best in the test1 cohort, the 2D model in the training, validation, and test3 cohorts, and the 3D model in the test2 cohort. The combined model achieved area under the curve (AUC) values of 0.964, 0.877, 0.868, 0.884, and 0.849 in the training, validation, test1, test2, and test3 cohorts, respectively. We propose an RP prediction model that integrates clinical information, medical text, and 2.5D MIL features, offering a new approach to predicting the side effects of radiation therapy.
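
Two preprocessing steps in this pipeline lend themselves to a short illustration: TF-IDF vectorization of report text and assembly of a 7-slice 2.5D stack around the central slice. The sketch below uses toy data and a stand-in criterion for the "largest lung ROI" slice; it is an assumption-laden illustration, not the authors' implementation.

```python
# Toy illustration of (1) TF-IDF vectors from report text and (2) a 2.5D stack of
# the central slice ± 3 slices (stand-in ROI criterion; not the authors' code).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

reports = [
    "ground glass opacity in the right upper lobe, no effusion",
    "consolidation with air bronchogram, enlarged mediastinal nodes",
]
tfidf = TfidfVectorizer(max_features=200).fit_transform(reports).toarray()  # text feature matrix

volume = np.random.rand(96, 128, 128)           # toy CT volume (slices, H, W)
roi_area = volume.reshape(96, -1).sum(axis=1)   # stand-in for per-slice lung ROI area
center = int(roi_area.argmax())                 # slice with the largest ROI
lo, hi = max(center - 3, 0), min(center + 4, volume.shape[0])
stack_25d = volume[lo:hi]                       # 7-slice 2.5D input (central slice ± 3)
print(tfidf.shape, stack_25d.shape)
```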

Hierarchical Multi-Label Classification Model for CBCT-Based Extraction Socket Healing Assessment and Stratified Diagnostic Decision-Making to Assist Implant Treatment Planning.

Li Q, Han R, Huang J, Liu CB, Zhao S, Ge L, Zheng H, Huang Z

PubMed · Aug 21, 2025
Dental implant treatment planning requires assessing extraction socket healing, yet current methods face challenges distinguishing soft tissue from woven bone on cone beam computed tomography (CBCT) imaging and lack standardized classification systems. In this study, we propose a hierarchical multilabel classification model for CBCT-based extraction socket healing assessment. We established a novel classification system dividing extraction socket healing status into two levels: Level 1 distinguishes physiological healing (Type I) from pathological healing (Type II); Level 2 is further subdivided into 5 subtypes. The HierTransFuse-Net architecture integrates ResNet50 with a two-dimensional transformer module for hierarchical multilabel classification. Additionally, a stratified diagnostic principle coupled with random forest algorithms supported personalized implant treatment planning. The HierTransFuse-Net model performed excellently in classifying extraction socket healing, achieving an mAccuracy of 0.9705, with mPrecision, mRecall, and mF1 scores of 0.9156, 0.9376, and 0.9253, respectively. The HierTransFuse-Net model demonstrated superior diagnostic reliability (κω = 0.9234) significantly exceeding that of clinical practitioners (mean κω = 0.7148, range: 0.6449-0.7843). The random forest model based on stratified diagnostic decision indicators achieved an accuracy of 81.48% and an mF1 score of 82.55% in predicting 12 clinical treatment pathways. This study successfully developed HierTransFuse-Net, which demonstrated excellent performance in distinguishing different extraction socket healing statuses and subtypes. Random forest algorithms based on stratified diagnostic indicators have shown potential for clinical pathway prediction. The hierarchical multilabel classification system simulates clinical diagnostic reasoning, enabling precise disease stratification and providing a scientific basis for personalized treatment decisions.
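
As a rough architectural sketch of a two-level classification head (Level 1: physiological vs. pathological healing; Level 2: five subtypes), the snippet below attaches two linear heads to a ResNet50 backbone. The 2D transformer fusion module of HierTransFuse-Net is deliberately omitted, and all dimensions are assumptions.

```python
# Two-level classification head on a ResNet50 backbone (architectural sketch only;
# the 2D transformer fusion of HierTransFuse-Net is omitted, dimensions are assumed).
import torch
import torch.nn as nn
from torchvision.models import resnet50

class TwoLevelHead(nn.Module):
    def __init__(self, n_level1: int = 2, n_level2: int = 5):
        super().__init__()
        backbone = resnet50(weights=None)  # no pretrained weights needed for the sketch
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # global-pooled features
        self.head_level1 = nn.Linear(2048, n_level1)  # physiological vs. pathological healing
        self.head_level2 = nn.Linear(2048, n_level2)  # finer-grained healing subtypes

    def forward(self, x):
        feats = self.encoder(x).flatten(1)
        return self.head_level1(feats), self.head_level2(feats)

model = TwoLevelHead()
logits1, logits2 = model(torch.randn(2, 3, 224, 224))
print(logits1.shape, logits2.shape)  # torch.Size([2, 2]) torch.Size([2, 5])
```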

Deep Learning-Enhanced Single Breath-Hold Abdominal MRI at 0.55 T: Technical Feasibility and Image Quality Assessment.

Seifert AC, Breit HC, Obmann MM, Korolenko A, Nickel MD, Fenchel M, Boll DT, Vosshenrich J

PubMed · Aug 21, 2025
Inherently lower signal-to-noise ratios hamper the broad clinical use of low-field abdominal MRI. This study aimed to investigate the technical feasibility and image quality of deep learning (DL)-enhanced T2 HASTE and T1 VIBE-Dixon abdominal MRI at 0.55 T. From July 2024 to September 2024, healthy volunteers underwent conventional and DL-enhanced 0.55 T abdominal MRI, including conventional T2 HASTE, fat-suppressed T2 HASTE (HASTE FS), and T1 VIBE-Dixon acquisitions, as well as DL-enhanced single-breath-hold HASTE (HASTE DL-SBH), multi-breath-hold HASTE (HASTE DL-MBH), fat-suppressed single-breath-hold HASTE (HASTE FS DL-SBH), fat-suppressed multi-breath-hold HASTE (HASTE FS DL-MBH), and T1 VIBE-Dixon (VIBE-Dixon-DL) acquisitions. Three abdominal radiologists evaluated the scans for quality parameters and artifacts (5-point Likert scale) and for incidental findings. Interreader agreement and comparative analyses were conducted. In total, 33 healthy volunteers (mean age: 30 ± 4 years) were evaluated. Image quality was better for single-breath-hold DL-enhanced MRI (all P < 0.001) with good or better interreader agreement (κ ≥ 0.61), including T2 HASTE (HASTE DL-SBH: 4 [IQR: 4-4] vs. HASTE: 3 [3-3]), T2 HASTE FS (4 [4-4] vs. 3 [3-3]), and T1 VIBE-Dixon (4 [4-5] vs. 4 [3-4]). Similarly, image noise and spatial resolution were better for the DL-MRI scans (P < 0.001). No quality differences were found between single- and multi-breath-hold HASTE DL or HASTE FS DL (both: 4 [4-4]; P > 0.572). The number and size of incidental lesions were identical between techniques (16 lesions; mean diameter 8 ± 5 mm; P = 1.000). DL-based image reconstruction enables single-breath-hold T2 HASTE and T1 VIBE-Dixon abdominal imaging at 0.55 T with better image quality than conventional MRI.
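
Interreader agreement on the 5-point quality ratings is summarized with weighted kappa statistics (κ ≥ 0.61 above). A minimal sketch of such a computation is given below; the ratings and the quadratic weighting are illustrative assumptions rather than the study's exact analysis.

```python
# Weighted Cohen's kappa for interreader agreement on 5-point Likert ratings
# (made-up ratings; the exact weighting used in the study is an assumption here).
from sklearn.metrics import cohen_kappa_score

reader_a = [4, 4, 3, 5, 4, 3, 4, 2, 4, 5]
reader_b = [4, 3, 3, 5, 4, 4, 4, 2, 5, 5]
print(cohen_kappa_score(reader_a, reader_b, weights="quadratic"))
```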

Structure-Preserving Medical Image Generation from a Latent Graph Representation

Kevin Arias, Edwin Vargas, Kumar Vijay Mishra, Antonio Ortega, Henry Arguello

arXiv preprint · Aug 21, 2025
Supervised learning techniques have proven their efficacy in many applications with abundant data. However, applying these methods to medical imaging is challenging due to the scarcity of data, given the high acquisition costs and intricate characteristics of those images, thereby limiting the full potential of deep neural networks. To address the lack of data, augmentation techniques leverage geometry, color, and the synthesis ability of generative models (GMs). Despite previous efforts, gaps in the generation process limit the impact of data augmentation for improving understanding of medical images; e.g., the highly structured nature of some domains, such as X-ray images, is ignored. Current GMs rely solely on the network's capacity to blindly synthesize augmentations that preserve semantic relationships of chest X-ray images, such as anatomical restrictions, representative structures, or structural similarities consistent across datasets. In this paper, we introduce a novel GM that leverages the structural resemblance of medical images by learning a latent graph representation (LGR). We design an end-to-end model to learn (i) an LGR that captures the intrinsic structure of X-ray images and (ii) a graph convolutional network (GCN) that reconstructs the X-ray image from the LGR. We employ adversarial training to guide the generator and discriminator models in learning the distribution of the learned LGR. Using the learned GCN, our approach generates structure-preserving synthetic images by mapping generated LGRs to X-ray images. Additionally, we evaluate the learned graph representation on other tasks, such as X-ray image classification and segmentation. Numerical experiments demonstrate the efficacy of our approach, increasing performance by up to 3% and 2% for classification and segmentation, respectively.
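
The reconstruction path described above relies on graph convolutions over the latent graph. As a hedged illustration of the basic operation only (the paper's generator, discriminator, and LGR learning are not reproduced), the sketch below implements a single normalized graph convolution layer in matrix form.

```python
# Single normalized graph convolution layer, H' = relu((A/deg) @ H @ W)
# (basic operation only; the paper's generator/discriminator are not reproduced).
import torch
import torch.nn as nn

class GraphConv(nn.Module):
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, adj: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)  # row-wise degree normalization
        return torch.relu(self.weight((adj / deg) @ h))   # aggregate neighbors, then transform

n_nodes, in_dim = 16, 32
adj = (torch.rand(n_nodes, n_nodes) > 0.7).float()
adj = ((adj + adj.T + torch.eye(n_nodes)) > 0).float()    # symmetric adjacency with self-loops
h = torch.randn(n_nodes, in_dim)                          # toy node features from a latent graph
print(GraphConv(in_dim, 64)(adj, h).shape)                # torch.Size([16, 64])
```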

DCE-UNet: A Transformer-Based Fully Automated Segmentation Network for Multiple Adolescent Spinal Disorders in X-ray Images.

Xue Z, Deng S, Yue Y, Chen C, Li Z, Yang Y, Sun S, Liu Y

PubMed · Aug 21, 2025
In recent years, spinal X-ray image segmentation has played a vital role in the computer-aided diagnosis of various adolescent spinal disorders. However, due to the complex morphology of lesions and the fact that most existing methods are tailored to single-disease scenarios, current segmentation networks struggle to balance local detail preservation and global structural understanding across different disease types. As a result, they often suffer from limited accuracy, insufficient robustness, and poor adaptability. To address these challenges, we propose a novel fully automated spinal segmentation network, DCE-UNet, which integrates the local modeling strength of convolutional neural networks (CNNs) with the global contextual awareness of Transformers. The network introduces several architectural and feature fusion innovations. Specifically, a lightweight Transformer module is incorporated in the encoder to model high-level semantic features and enhance global contextual understanding. In the decoder, a Rec-Block module combining residual convolution and channel attention is designed to improve feature reconstruction and multi-scale fusion during the upsampling process. Additionally, the downsampling feature extraction path integrates a novel DC-Block that fuses channel and spatial attention mechanisms, enhancing the network's ability to represent complex lesion structures. Experiments conducted on a self-constructed large-scale multi-disease adolescent spinal X-ray dataset demonstrate that DCE-UNet achieves a Dice score of 91.3%, a mean Intersection over Union (mIoU) of 84.1, and a Hausdorff Distance (HD) of 4.007, outperforming several state-of-the-art comparison networks. Validation on real segmentation tasks further confirms that DCE-UNet delivers consistently superior performance across various lesion regions, highlighting its strong adaptability to multiple pathologies and promising potential for clinical application.
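
The reported evaluation uses Dice, mean IoU, and Hausdorff distance. For reference, a minimal sketch of the first two overlap metrics on a toy binary mask pair is shown below (metric code only, not the paper's evaluation pipeline).

```python
# Dice and IoU for a binary mask pair (metric code only, on a toy example).
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def iou(pred: np.ndarray, gt: np.ndarray) -> float:
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / (union + 1e-8)

pred = np.zeros((128, 128), dtype=bool); pred[30:80, 30:80] = True
gt = np.zeros((128, 128), dtype=bool); gt[35:85, 35:85] = True
print(f"Dice={dice(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```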

Ascending Aortic Dimensions and Body Size: Allometric Scaling, Normative Values, and Prognostic Performance.

Tavolinejad H, Beeche C, Dib MJ, Pourmussa B, Damrauer SM, DePaolo J, Azzo JD, Salman O, Duda J, Gee J, Kun S, Witschey WR, Chirinos JA

PubMed · Aug 21, 2025
Ascending aortic (AscAo) dimensions partially depend on body size. Ratiometric (linear) indexing of AscAo dimensions to height and body surface area (BSA) is currently recommended, but it is unclear whether these allometric relationships are indeed linear. This study aimed to evaluate the allometric relations, normative values, and prognostic performance of AscAo dimension indices. We studied UK Biobank (UKB) (n = 49,271) and Penn Medicine BioBank (PMBB) (n = 8,426) participants. A convolutional neural network was used to segment the thoracic aorta from available magnetic resonance and computed tomography thoracic images. Normal allometric exponents of AscAo dimensions were derived from log-log models among healthy reference subgroups. Prognostic associations of AscAo dimensions were assessed with the use of Cox models. Among the reference subgroups of both the UKB (n = 11,310; age 52 ± 8 years; 37% male) and PMBB (n = 799; age 50 ± 16 years; 41% male), diameter/height, diameter/BSA, and area/BSA exhibited highly nonlinear relationships with body size. In contrast, the allometric exponent of the area/height index was close to unity (UKB: 1.04; PMBB: 1.13). Accordingly, the linear area/height index did not exhibit residual associations with height (UKB: R² = 0.04 [P = 0.411]; PMBB: R² = 0.08 [P = 0.759]). Across quintiles of height and BSA, area/height was the only ratiometric index that consistently classified aortic dilation, whereas all other indices systematically underestimated or overestimated AscAo dilation at the extremes of body size. Area/height was robustly associated with thoracic aorta events in the UKB (HR: 3.73; P < 0.001) and the PMBB (HR: 1.83; P < 0.001). Among AscAo indices, area/height was allometrically correct, did not exhibit residual associations with body size, and was consistently associated with adverse events.
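
The allometric analysis fits log-log models of the form log(dimension) = a + b·log(height); an exponent b near 1 implies that the simple area/height ratio is size-independent. A toy sketch of estimating b follows, using synthetic data constructed under that assumption.

```python
# Estimating an allometric exponent b from log(area) = a + b*log(height)
# (synthetic data built with exponent ~1; not the UKB/PMBB analysis itself).
import numpy as np

rng = np.random.default_rng(1)
height = rng.uniform(1.5, 2.0, size=500)                      # body height in meters
area = 6.0 * height**1.05 * np.exp(rng.normal(0, 0.1, 500))   # toy AscAo area with exponent near 1

b, a = np.polyfit(np.log(height), np.log(area), deg=1)        # slope = allometric exponent
print(f"estimated exponent b = {b:.2f}")                      # near 1 -> area/height is size-independent
```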