Predictive model integrating deep learning and clinical features based on ultrasound imaging data for surgical intervention in intussusception in children younger than 8 months.

Qian YF, Zhou JJ, Shi SL, Guo WL

PubMed · Aug 22 2025
The objective of this study was to identify risk factors for enema reduction failure and to establish a combined model that integrates deep learning (DL) features and clinical features for predicting surgical intervention in intussusception in children younger than 8 months of age. This was a retrospective study of intussusception with a prospective validation cohort. The retrospective data were collected from two hospitals in southeast China between January 2017 and December 2022, and the prospective data were collected between January 2023 and July 2024. A total of 415 intussusception cases in patients younger than 8 months were included in the study. The 280 cases collected from Centre 1 were randomly divided into two groups at a 7:3 ratio: the training cohort (n=196) and the internal validation cohort (n=84). The 85 cases collected from Centre 2 were designated as the external validation cohort. Pretrained DL networks were used to extract deep transfer learning features, with least absolute shrinkage and selection operator (LASSO) regression selecting the features with non-zero coefficients. The clinical features were screened by univariate and multivariate logistic regression analyses. We constructed a combined model that integrated the two selected feature types, along with individual clinical and DL models for comparison. Additionally, the combined model was validated in a prospective cohort (n=50) collected from Centre 1. In the internal and external validation cohorts, the combined model (area under the curve (AUC): 0.911 and 0.871, respectively) demonstrated better performance for predicting surgical intervention in intussusception in children younger than 8 months of age than the clinical model (AUC: 0.776 and 0.740, respectively) and the DL model (AUC: 0.828 and 0.793, respectively). In the prospective validation cohort, the combined model also demonstrated impressive performance with an AUC of 0.890. The combined model, integrating DL and clinical features, demonstrated stable predictive accuracy, suggesting its potential for improving clinical therapeutic strategies for intussusception.
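
As a rough illustration of the feature-selection and fusion step described above (LASSO keeping deep features with non-zero coefficients, then a combined logistic model with clinical features), the sketch below uses scikit-learn; the file names and array layouts are hypothetical, not the authors' pipeline.

```python
# Minimal sketch: LASSO-based selection of deep features followed by a combined
# logistic model; the feature matrices and file names are placeholders.
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.preprocessing import StandardScaler

dl_features = np.load("dl_features_train.npy")   # (n_cases, n_deep_features), hypothetical
clinical = np.load("clinical_train.npy")         # (n_cases, n_clinical_features), hypothetical
y = np.load("labels_train.npy")                  # 1 = surgical intervention, 0 = successful enema reduction

# Standardize deep features and keep only those with non-zero LASSO coefficients
dl_scaled = StandardScaler().fit_transform(dl_features)
lasso = LassoCV(cv=5, random_state=0).fit(dl_scaled, y)
selected = np.flatnonzero(lasso.coef_)
dl_selected = dl_scaled[:, selected]

# Combined model: selected deep-learning features plus screened clinical features
X_combined = np.hstack([dl_selected, clinical])
combined_model = LogisticRegression(max_iter=1000).fit(X_combined, y)
print("selected deep features:", selected.size)
```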

Initial Recurrence Risk Stratification of Papillary Thyroid Cancer Based on Intratumoral and Peritumoral Dual Energy CT Radiomics.

Zhou Y, Xu Y, Si Y, Wu F, Xu X

PubMed · Aug 21 2025
This study aims to evaluate the potential of dual-energy computed tomography (DECT)-based radiomics in preoperative risk stratification for the prediction of initial recurrence in papillary thyroid carcinoma (PTC). The retrospective analysis included 236 PTC cases (165 in the training cohort, 71 in the validation cohort) collected between July 2020 and June 2021. Tumor segmentation was carried out in both intratumoral and peritumoral areas (1 mm inner and outer to the tumor boundary). Three region-specific rad-scores were developed: rad-score (VOI-whole), rad-score (VOI-outer layer), and rad-score (VOI-inner layer). Three radiomics models incorporating these rad-scores and additional risk factors were compared to a clinical model alone. The optimal radiomics model was presented as a nomogram. Rad-scores from the peritumoral regions (VOI-outer layer and VOI-inner layer) outperformed the intratumoral rad-score (VOI-whole). All radiomics models surpassed the clinical model, with the peritumoral-based models (radiomics models 2 and 3) outperforming the intratumoral-based model (radiomics model 1). The top-performing nomogram, which included tumor size, tumor site, and rad-score (VOI-inner layer), achieved an area under the curve (AUC) of 0.877 in the training cohort and 0.876 in the validation cohort. The nomogram demonstrated good calibration, clinical utility, and stability. DECT-based intratumoral and peritumoral radiomics advance PTC initial recurrence risk prediction, providing clinical radiology with precise predictive tools. Further work is needed to refine the model and enhance its clinical application. Radiomics analysis of DECT, particularly in peritumoral regions, offers valuable predictive information for assessing the risk of initial recurrence in PTC.
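
The 1 mm inner and outer peritumoral shells described above could be derived from a binary tumor mask roughly as in the following sketch; the margin-to-voxel conversion and the crude anisotropy handling are assumptions for illustration, not the study's segmentation protocol.

```python
# Hedged sketch: derive 1 mm outer and inner peritumoral shells from a binary
# tumor mask by dilating/eroding with a radius expressed in voxels.
import numpy as np
from scipy import ndimage

def peritumoral_shells(mask, spacing_mm, margin_mm=1.0):
    """mask: 3D boolean array; spacing_mm: (z, y, x) voxel spacing in mm."""
    # Approximate the number of dilation/erosion iterations from the smallest
    # spacing; anisotropic voxels are handled only coarsely here.
    iterations = max(1, int(round(margin_mm / min(spacing_mm))))
    dilated = ndimage.binary_dilation(mask, iterations=iterations)
    eroded = ndimage.binary_erosion(mask, iterations=iterations)
    outer_shell = dilated & ~mask   # ~1 mm ring just outside the tumor boundary
    inner_shell = mask & ~eroded    # ~1 mm ring just inside the tumor boundary
    return outer_shell, inner_shell
```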

CT-based machine learning model integrating intra- and peri-tumoral radiomics features for predicting occult lymph node metastasis in peripheral lung cancer.

Lu X, Liu F, E J, Cai X, Yang J, Wang X, Zhang Y, Sun B, Liu Y

PubMed · Aug 21 2025
Accurate preoperative assessment of occult lymph node metastasis (OLNM) plays a crucial role in informing therapeutic decision-making for lung cancer patients. Computed tomography (CT) is the most widely used imaging modality for preoperative work-up. The aim of this study was to develop and validate a CT-based machine learning model integrating intra- and peri-tumoral features to predict OLNM in lung cancer patients. Eligible patients with peripheral lung cancer confirmed by radical surgical excision with systematic lymphadenectomy were retrospectively recruited from January 2019 to December 2021. A total of 1688 radiomics features were obtained from each manually segmented VOI, which was composed of the gross tumor volume (GTV) covering the boundary of the entire tumor and three peritumoral volumes (PTV3, PTV6, and PTV9) capturing the region outside the tumor. A clinical-radiomics model incorporating the radiomics signature, independent clinical factors, and CT semantic features was established via multivariable logistic regression analysis and presented as a nomogram. Model performance was evaluated by discrimination, calibration, and clinical utility. Overall, 591 patients were recruited in the training cohort and 253 in the validation cohort. The radiomics signature of PTV9 showed superior diagnostic performance compared with the PTV3 and PTV6 models. Integrating the GPTV radiomics signature (incorporating the Rad-scores of GTV and PTV9) with the clinical risk factor of serum CEA level and the CT imaging features of lobulation sign and tumor-pleura relationship demonstrated favorable accuracy in predicting OLNM in the training cohort (AUC, 0.819; 95% CI: 0.780-0.857) and validation cohort (AUC, 0.801; 95% CI: 0.741-0.860). The predictive performance of the clinical-radiomics model was statistically significantly superior to that of the clinical model in both cohorts (all p < 0.05). The clinical-radiomics model was able to serve as a noninvasive preoperative prediction tool for personalized risk assessment of OLNM in peripheral lung cancer patients.
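
For orientation only, extracting radiomics features from the GTV and each peritumoral volume might look like the sketch below. The abstract does not state which toolkit was used; pyradiomics is assumed here purely for illustration, and all file paths are hypothetical.

```python
# Illustrative sketch (not the authors' pipeline): per-region radiomics feature
# extraction with pyradiomics; paths and mask files are placeholders.
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()  # default feature classes and settings

ct_path = "patient001_ct.nii.gz"        # hypothetical CT volume
masks = {
    "GTV": "patient001_gtv.nii.gz",     # gross tumor volume mask
    "PTV3": "patient001_ptv3.nii.gz",   # 3 mm peritumoral shell mask
    "PTV6": "patient001_ptv6.nii.gz",   # 6 mm peritumoral shell mask
    "PTV9": "patient001_ptv9.nii.gz",   # 9 mm peritumoral shell mask
}

features = {}
for region, mask_path in masks.items():
    result = extractor.execute(ct_path, mask_path)  # ordered dict of feature values
    features[region] = {k: v for k, v in result.items() if not k.startswith("diagnostics")}
print({region: len(vals) for region, vals in features.items()})
```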

Predicting Radiation Pneumonitis Integrating Clinical Information, Medical Text, and 2.5D Deep Learning Features in Lung Cancer.

Wang W, Ren M, Ren J, Dang J, Zhao X, Li C, Wang Y, Li G

PubMed · Aug 21 2025
To construct a prediction model for radiation pneumonitis (RP) in lung cancer patients based on clinical information, medical text, and 2.5D deep learning (DL) features. A total of 356 patients with lung cancer from the Heping Campus of the First Hospital of China Medical University were randomly divided at a 7:3 ratio into training and validation cohorts, and 238 patients from 3 other centers were included in the testing cohorts for assessing model generalizability. We used the term frequency-inverse document frequency (TF-IDF) method to generate numerical vectors from computed tomography (CT) report texts. The CT and radiation therapy dose slices demonstrating the largest lung region of interest across the coronal and transverse planes were taken as the central slice; 3 slices above and below the central slice were then selected to create the 2.5D data. We extracted DL features via DenseNet121, DenseNet201, and Twins-SVT and integrated them via multi-instance learning (MIL) fusion. The performances of the 2D and 3D DL models were also compared with that of the 2.5D MIL model. Finally, RP prediction models based on clinical information, medical text, and 2.5D DL features were constructed, validated, and tested. The CT-based 2.5D MIL model was significantly better than the 2D and 3D DL models in the training, validation, and test cohorts. For the radiation therapy dose data, the 2.5D MIL model was optimal in the test1 cohort, the 2D model was optimal in the training, validation, and test3 cohorts, and the 3D model was optimal in the test2 cohort. A combined model achieved area under the curve (AUC) values of 0.964, 0.877, 0.868, 0.884, and 0.849 in the training, validation, test1, test2, and test3 cohorts, respectively. We propose an RP prediction model that integrates clinical information, medical text, and 2.5D MIL features, which provides new ideas for predicting the side effects of radiation therapy.
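
The 2.5D construction described above (the slice with the largest lung region of interest taken as the centre, plus three slices above and below) could be sketched as follows; array shapes and the ROI mask are assumed, and coronal/transverse handling is omitted for brevity.

```python
# Hedged sketch of 2.5D data construction: pick the slice with the largest ROI
# area as the centre and stack 3 slices above and below it.
import numpy as np

def build_25d_stack(volume, roi_mask, n_context=3):
    """volume, roi_mask: 3D arrays indexed (slice, height, width)."""
    areas = roi_mask.reshape(roi_mask.shape[0], -1).sum(axis=1)
    centre = int(np.argmax(areas))                 # slice with the largest ROI
    lo = max(0, centre - n_context)
    hi = min(volume.shape[0], centre + n_context + 1)
    return volume[lo:hi]                           # up to 7 slices of 2.5D context
```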

Multimodal Integration in Health Care: Development With Applications in Disease Management.

Hao Y, Cheng C, Li J, Li H, Di X, Zeng X, Jin S, Han X, Liu C, Wang Q, Luo B, Zeng X, Li K

PubMed · Aug 21 2025
Multimodal data integration has emerged as a transformative approach in the health care sector, systematically combining complementary biological and clinical data sources such as genomics, medical imaging, electronic health records, and wearable device outputs. This approach provides a multidimensional perspective of patient health that enhances the diagnosis, treatment, and management of various medical conditions. This viewpoint presents an overview of the current state of multimodal integration in health care, spanning clinical applications, current challenges, and future directions. We focus primarily on its applications across different disease domains, particularly in oncology and ophthalmology; other diseases are discussed only briefly owing to the limited available literature. In oncology, the integration of multimodal data enables more precise tumor characterization and personalized treatment plans. Multimodal fusion demonstrates accurate prediction of anti-human epidermal growth factor receptor 2 therapy response (area under the curve = 0.91). In ophthalmology, multimodal integration through the combination of genetic and imaging data facilitates the early diagnosis of retinal diseases. However, substantial challenges remain regarding data standardization, model deployment, and model interpretability. We also highlight the future directions of multimodal integration, including its expanded disease applications, such as neurological and otolaryngological diseases, and the trend toward large-scale multimodal models, which enhance accuracy. Overall, the innovative potential of multimodal integration is expected to further revolutionize the health care industry, providing more comprehensive and personalized solutions for disease management.

Artificial Intelligence-Driven Ultrasound Identifies Rare Triphasic Colon Cancer and Unlocks Candidate Genomic Mechanisms via Ultrasound Genomic Techniques.

Li X, Wang S, Kahlert UD, Zhou T, Xu K, Shi W, Yan X

PubMed · Aug 21 2025
Background: Colon cancer is a heterogeneous disease, and rare subtypes like triphasic colon cancer are difficult to detect with standard methods. Artificial intelligence (AI)-driven ultrasound combined with genomic analysis offers a promising approach to improve subtype identification and uncover molecular mechanisms. Methods: The authors used an AI-driven ultrasound model to identify rare triphasic colon cancer, characterized by a mix of epithelial, mesenchymal, and proliferative components. The molecular features were validated using immunohistochemistry, targeting classical epithelial markers, mesenchymal markers, and proliferation indices. Subsequently, ultrasound genomic techniques were applied to map transcriptomic alterations in conventional colon cancer onto ultrasound images. Differentially expressed genes were identified using the edgeR package. Pearson correlation analysis was performed to assess the relationship between imaging features and molecular markers. Results: The AI-driven ultrasound model successfully identified rare triphasic features in colon cancer. These imaging features showed significant correlation with immunohistochemical expression of epithelial markers, mesenchymal markers, and the proliferation index. Moreover, ultrasound genomic techniques revealed that multiple oncogenic transcripts could be spatially mapped to distinct patterns within the ultrasound images of conventional colon cancer and were involved in classical cancer-related pathways. Conclusions: AI-enhanced ultrasound imaging enables noninvasive identification of rare triphasic colon cancer and reveals functional molecular signatures in general colon cancer. This integrative approach may support future precision diagnostics and image-guided therapies.
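
The correlation step named above (Pearson correlation between imaging features and immunohistochemical markers) is straightforward to reproduce in outline; the per-patient values below are placeholders, not study data.

```python
# Minimal sketch: Pearson correlation between an ultrasound imaging feature and
# an immunohistochemical marker score across patients (placeholder arrays).
import numpy as np
from scipy.stats import pearsonr

imaging_feature = np.array([0.42, 0.55, 0.31, 0.78, 0.66, 0.49])  # hypothetical per-patient values
ihc_marker = np.array([12.0, 18.5, 9.0, 25.0, 21.0, 15.5])        # hypothetical marker expression

r, p_value = pearsonr(imaging_feature, ihc_marker)
print(f"Pearson r = {r:.3f}, p = {p_value:.3g}")
```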

TPA: Temporal Prompt Alignment for Fetal Congenital Heart Defect Classification

Darya Taratynova, Alya Almsouti, Beknur Kalmakhanbet, Numan Saeed, Mohammad Yaqub

arXiv preprint · Aug 21 2025
Congenital heart defect (CHD) detection in ultrasound videos is hindered by image noise and probe positioning variability. While automated methods can reduce operator dependence, current machine learning approaches often neglect temporal information, limit themselves to binary classification, and do not account for prediction calibration. We propose Temporal Prompt Alignment (TPA), a method leveraging a foundation image-text model and prompt-aware contrastive learning to classify fetal CHD on cardiac ultrasound videos. TPA extracts features from each frame of video subclips using an image encoder, aggregates them with a trainable temporal extractor to capture heart motion, and aligns the video representation with class-specific text prompts via a margin-hinge contrastive loss. To enhance calibration for clinical reliability, we introduce a Conditional Variational Autoencoder Style Modulation (CVAESM) module, which learns a latent style vector to modulate embeddings and quantifies classification uncertainty. Evaluated on a private dataset for CHD detection and on a large public dataset, EchoNet-Dynamic, for systolic dysfunction, TPA achieves a state-of-the-art macro F1 score of 85.40% for CHD diagnosis, while also reducing expected calibration error by 5.38% and adaptive ECE by 6.8%. On EchoNet-Dynamic's three-class task, it boosts macro F1 by 4.73% (from 53.89% to 58.62%). In short, TPA is a framework for fetal CHD classification in ultrasound videos that integrates temporal modeling, prompt-aware contrastive learning, and uncertainty quantification.
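
A hedged PyTorch sketch of a margin-hinge contrastive loss that pulls each video embedding toward its class text-prompt embedding and pushes it away from the other prompts is given below; the margin value, embedding shapes, and exact formulation are assumptions, not the paper's implementation.

```python
# Hedged sketch of a margin-hinge contrastive loss between video embeddings and
# class text-prompt embeddings; not the authors' exact formulation.
import torch
import torch.nn.functional as F

def margin_hinge_contrastive(video_emb, prompt_emb, labels, margin=0.2):
    """video_emb: (B, D); prompt_emb: (C, D), one embedding per class; labels: (B,) long."""
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(prompt_emb, dim=-1)
    sims = v @ t.t()                                     # (B, C) cosine similarities
    pos = sims.gather(1, labels.unsqueeze(1))            # similarity to the correct class prompt
    hinge = torch.clamp(margin - pos + sims, min=0.0)    # penalize negatives within the margin
    mask = F.one_hot(labels, num_classes=sims.size(1)).bool()
    hinge = hinge.masked_fill(mask, 0.0)                 # drop the positive-class term
    return hinge.mean()
```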

COVID19 Prediction Based On CT Scans Of Lungs Using DenseNet Architecture

Deborup Sanyal

arXiv preprint · Aug 21 2025
COVID-19 took the world by storm starting in December 2019. A highly infectious communicable disease, COVID-19 is caused by the SARS-CoV-2 virus. By March 2020, the World Health Organization (WHO) had declared COVID-19 a global pandemic. A pandemic in the 21st century, after almost 100 years, was something the world was not prepared for, and it resulted in the deaths of around 1.6 million people worldwide. The most common symptoms of COVID-19 were associated with the respiratory system and resembled a cold, flu, or pneumonia. After extensive research, doctors and scientists concluded that the main reason for lives being lost to COVID-19 was failure of the respiratory system; patients were dying gasping for breath. The world's top healthcare systems were failing badly as there was an acute shortage of hospital beds, oxygen cylinders, and ventilators, and many people died without receiving any treatment at all. The aim of this project is to help doctors assess the severity of COVID-19 by reading the patient's computed tomography (CT) scans of the lungs. Computer models are less prone to human error, and machine learning and neural network models tend to give better accuracy as training improves over time. We have decided to use a convolutional neural network model. Given that a patient tests positive, our model will analyze the severity of the COVID-19 infection within one month of the positive test result. The predicted outcome may be favorable or unfavorable (if the infection leads to intubation or death), based entirely on the CT scans in the dataset.
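
A minimal torchvision sketch of a DenseNet121-based classifier of the kind the project describes is shown below; the pretrained-weights choice and the binary outcome head are assumptions for illustration.

```python
# Hedged sketch: DenseNet121 backbone with a new classification head for CT-based
# COVID-19 severity prediction (binary outcome assumed).
import torch.nn as nn
from torchvision.models import densenet121, DenseNet121_Weights

def build_densenet_classifier(num_classes=2, pretrained=True):
    weights = DenseNet121_Weights.DEFAULT if pretrained else None
    model = densenet121(weights=weights)
    # Replace the ImageNet classifier with a head for the severity classes
    model.classifier = nn.Linear(model.classifier.in_features, num_classes)
    return model
```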

Hierarchical Multi-Label Classification Model for CBCT-Based Extraction Socket Healing Assessment and Stratified Diagnostic Decision-Making to Assist Implant Treatment Planning.

Li Q, Han R, Huang J, Liu CB, Zhao S, Ge L, Zheng H, Huang Z

PubMed · Aug 21 2025
Dental implant treatment planning requires assessing extraction socket healing, yet current methods face challenges distinguishing soft tissue from woven bone on cone beam computed tomography (CBCT) imaging and lack standardized classification systems. In this study, we propose a hierarchical multilabel classification model for CBCT-based extraction socket healing assessment. We established a novel classification system dividing extraction socket healing status into two levels: Level 1 distinguishes physiological healing (Type I) from pathological healing (Type II); Level 2 further subdivides these into 5 subtypes. The HierTransFuse-Net architecture integrates ResNet50 with a two-dimensional transformer module for hierarchical multilabel classification. Additionally, a stratified diagnostic principle coupled with random forest algorithms supported personalized implant treatment planning. The HierTransFuse-Net model performed excellently in classifying extraction socket healing, achieving an mAccuracy of 0.9705, with mPrecision, mRecall, and mF1 scores of 0.9156, 0.9376, and 0.9253, respectively. The HierTransFuse-Net model demonstrated superior diagnostic reliability (κω = 0.9234), significantly exceeding that of clinical practitioners (mean κω = 0.7148, range: 0.6449-0.7843). The random forest model based on stratified diagnostic decision indicators achieved an accuracy of 81.48% and an mF1 score of 82.55% in predicting 12 clinical treatment pathways. This study successfully developed HierTransFuse-Net, which demonstrated excellent performance in distinguishing different extraction socket healing statuses and subtypes. Random forest algorithms based on stratified diagnostic indicators have shown potential for clinical pathway prediction. The hierarchical multilabel classification system simulates clinical diagnostic reasoning, enabling precise disease stratification and providing a scientific basis for personalized treatment decisions.
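
The random forest step (mapping stratified diagnostic indicators to one of 12 clinical treatment pathways) might look like the sketch below in scikit-learn; the indicator matrix, label encoding, and split are placeholders rather than the study's configuration.

```python
# Hedged sketch: a random forest mapping stratified diagnostic indicators to one
# of 12 clinical treatment pathways; feature names and data are placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

X = np.load("stratified_indicators.npy")   # hypothetical (n_cases, n_indicators) matrix
y = np.load("treatment_pathways.npy")      # hypothetical labels in {0, ..., 11}

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
rf = RandomForestClassifier(n_estimators=500, random_state=42).fit(X_train, y_train)
pred = rf.predict(X_test)
print("accuracy:", accuracy_score(y_test, pred))
print("macro F1:", f1_score(y_test, pred, average="macro"))
```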

Spatial imaging features derived from SUVmax location in resectable NSCLC are associated with tumor aggressiveness.

Jiang Z, Spielvogel C, Haberl D, Yu J, Krisch M, Szakall S, Molnar P, Fillinger J, Horvath L, Renyi-Vamos F, Aigner C, Dome B, Lang C, Megyesfalvi Z, Kenner L, Hacker M

PubMed · Aug 21 2025
Accurate non-invasive prediction of histopathologic invasiveness and recurrence risk remains a clinical challenge in resectable non-small cell lung cancer (NSCLC). We developed and validated the Edge Proximity Score (EPS), a novel [18F]FDG PET/CT-based spatial imaging feature that quantifies the displacement of SUVmax relative to the tumor centroid and perimeter, to assess tumor aggressiveness and predict progression-free survival (PFS). This retrospective study included 244 NSCLC patients with preoperative [18F]FDG PET/CT. EPS was computed from normalized SUVmax-to-centroid and SUVmax-to-perimeter distances. A total of 115 PET radiomics features were extracted and standardized. Eight machine learning models (80:20 split) were trained to predict lymphovascular invasion (LVI), visceral pleural invasion (VPI), and spread through air spaces (STAS), with feature importance assessed using SHAP. Prognostic analysis was conducted using multivariable Cox regression. A survival prediction model incorporating EPS was externally validated in the TCIA cohort. RNA sequencing data from 76 TCIA patients were used for transcriptomic and immune profiling. EPS was significantly elevated in tumors with LVI, VPI, and STAS (P < 0.001), consistently ranked among the top SHAP features, and was an independent predictor of PFS (HR = 2.667, P = 0.015). The EPS-based nomogram achieved AUCs of 0.67, 0.70, and 0.68 for predicting 1-, 3-, and 5-year PFS in the TCIA validation cohort. High EPS was associated with proliferative and metabolic gene signatures, whereas low EPS was linked to immune activation and neutrophil infiltration. EPS is a biologically relevant, non-invasive imaging biomarker that may improve risk stratification in NSCLC.
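
One plausible reading of the EPS definition (normalized SUVmax-to-centroid and SUVmax-to-perimeter distances) is sketched below; the abstract does not specify the exact normalization or how the two distances are combined, so that part is an explicit assumption.

```python
# Hedged sketch of an Edge Proximity Score-like quantity from a PET SUV volume
# and a binary tumor mask; the normalization is an assumption, not the published definition.
import numpy as np
from scipy import ndimage

def edge_proximity_score(suv, mask, spacing_mm):
    """suv, mask: 3D arrays; spacing_mm: (z, y, x) voxel spacing in mm."""
    suv_in_tumor = np.where(mask, suv, -np.inf)
    peak = np.unravel_index(np.argmax(suv_in_tumor), suv.shape)  # SUVmax voxel
    centroid = ndimage.center_of_mass(mask)                      # tumor centroid (voxel coords)

    # Physical distance from the SUVmax voxel to the tumor centroid
    d_centroid = np.linalg.norm((np.array(peak) - np.array(centroid)) * np.asarray(spacing_mm))
    # Distance from the SUVmax voxel to the tumor perimeter via a distance transform
    d_perimeter = ndimage.distance_transform_edt(mask, sampling=spacing_mm)[peak]

    # Assumed normalization: how close the peak sits to the edge relative to the
    # total centroid-plus-edge extent; higher values indicate an edge-proximal SUVmax.
    return d_centroid / (d_centroid + d_perimeter + 1e-8)
```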