
CEREBLEED: Automated quantification and severity scoring of intracranial hemorrhage on non-contrast CT

Cepeda, S., Esteban-Sinovas, O., Arrese, I., Sarabia, R.

medRxiv preprint · Jun 13 2025
Background: Intracranial hemorrhage (ICH), whether spontaneous or traumatic, is a neurological emergency with high morbidity and mortality. Accurate assessment of severity is essential for neurosurgical decision-making. This study aimed to develop and evaluate a fully automated, deep learning-based tool for the standardized assessment of ICH severity, based on the segmentation of the hemorrhage and intracranial structures and the computation of an objective severity index. Methods: Non-contrast cranial CT scans from patients with spontaneous or traumatic ICH were retrospectively collected from public datasets and a tertiary care center. Deep learning models were trained to segment hemorrhages and intracranial structures. These segmentations were used to compute a severity index reflecting bleeding burden and mass effect through volumetric relationships. Segmentation performance was evaluated on a hold-out test cohort. In a prospective cohort, the severity index was assessed in relation to expert-rated CT severity, clinical outcomes, and the need for urgent neurosurgical intervention. Results: A total of 1,110 non-contrast cranial CT scans were analyzed, 900 from the retrospective cohort and 200 from the prospective evaluation cohort. The binary segmentation model achieved a median Dice score of 0.90 for total hemorrhage. The multilabel model yielded Dice scores ranging from 0.55 to 0.94 across hemorrhage subtypes. The severity index significantly correlated with expert-rated CT severity (p < 0.001), the modified Rankin Scale (p = 0.007), and the Glasgow Outcome Scale-Extended (p = 0.039), and independently predicted the need for urgent surgery (p < 0.001). A threshold of ~300 was identified as a decision point for surgical management (AUC = 0.83). Conclusion: We developed a fully automated and openly accessible pipeline for the analysis of non-contrast cranial CT in intracranial hemorrhage. It computes a novel index that objectively quantifies hemorrhage severity and is significantly associated with clinically relevant outcomes, including the need for urgent neurosurgical intervention.
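The abstract does not give the CEREBLEED index formula, only that it is computed from volumetric relationships between the segmented hemorrhage and intracranial structures. As a hypothetical illustration of that idea, the sketch below scales a bleed's volume by the intracranial volume; the scaling factor and the example numbers are invented, not the paper's definition.

```python
# Illustrative sketch only: a volume-ratio severity score derived from
# segmentation voxel counts. The actual CEREBLEED index is not specified
# in the abstract; this stand-in ignores mass-effect terms entirely.

def volume_ml(voxel_count: int, voxel_mm3: float) -> float:
    """Convert a voxel count to millilitres (1 mL = 1000 mm^3)."""
    return voxel_count * voxel_mm3 / 1000.0

def severity_index(hemorrhage_voxels: int, intracranial_voxels: int,
                   voxel_mm3: float = 1.0) -> float:
    """Toy severity index: bleeding burden relative to intracranial volume."""
    hem_ml = volume_ml(hemorrhage_voxels, voxel_mm3)
    icv_ml = volume_ml(intracranial_voxels, voxel_mm3)
    return 1000.0 * hem_ml / icv_ml

# A 60 mL bleed in a 1500 mL intracranial space scores 40 on this toy
# scale; the abstract's ~300 surgical threshold applies to the real index.
score = severity_index(60_000, 1_500_000, voxel_mm3=1.0)
```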

Clinically reported covert cerebrovascular disease and risk of neurological disease: a whole-population cohort of 395,273 people using natural language processing

Iveson, M. H., Mukherjee, M., Davidson, E. M., Zhang, H., Sherlock, L., Ball, E. L., Mair, G., Hosking, A., Whalley, H., Poon, M. T. C., Wardlaw, J. M., Kent, D., Tobin, R., Grover, C., Alex, B., Whiteley, W. N.

medRxiv preprint · Jun 13 2025
Importance: Understanding the relevance of covert cerebrovascular disease (CCD) for later health will allow clinicians to more effectively monitor and target interventions. Objective: To examine the association between clinically reported CCD, measured using natural language processing (NLP), and subsequent disease risk. Design, Setting and Participants: We conducted a retrospective e-cohort study using linked health record data. From all people with clinical brain imaging in Scotland from 2010 to 2018, we selected people with no prior hospitalisation for neurological disease. The data were analysed from March 2024 to June 2025. Exposure: Four phenotypes were identified with NLP of imaging reports: white matter hypoattenuation or hyperintensities (WMH), lacunes, cortical infarcts and cerebral atrophy. Main Outcomes and Measures: Adjusted hazard ratios (aHR) for stroke, dementia, and Parkinson's disease (conditions previously associated with CCD), epilepsy (a brain-based control condition) and colorectal cancer (a non-brain control condition), adjusted for age, sex, deprivation, region, scan modality, and pre-scan healthcare, were calculated for each phenotype. Results: From 395,273 people with brain imaging and no history of neurological disease, 145,978 (37%) had ≥1 phenotype. For each phenotype, the aHR of any stroke was: WMH 1.4 (95% CI: 1.3-1.4), lacunes 1.6 (1.5-1.6), cortical infarct 1.7 (1.6-1.8), and cerebral atrophy 1.1 (1.0-1.1). The aHR of any dementia was: WMH 1.3 (1.3-1.3), lacunes 1.0 (0.9-1.0), cortical infarct 1.1 (1.0-1.1) and cerebral atrophy 1.7 (1.7-1.7). The aHR of Parkinson's disease was, in people with a report of: WMH 1.1 (1.0-1.2), lacunes 1.1 (0.9-1.2), cortical infarct 0.7 (0.6-0.9) and cerebral atrophy 1.4 (1.3-1.5). The aHRs between CCD phenotypes and epilepsy and colorectal cancer overlapped the null.
Conclusions and Relevance: NLP identified CCD and atrophy phenotypes from routine clinical image reports, and these had important associations with future stroke, dementia and Parkinson's disease. Prevention of neurological disease in people with CCD should be a priority for healthcare providers and policymakers. Key Points. Question: Are measures of covert cerebrovascular disease (CCD) associated with the risk of subsequent disease (stroke, dementia, Parkinson's disease, epilepsy, and colorectal cancer)? Findings: This study used a validated NLP algorithm to identify CCD (white matter hypoattenuation/hyperintensities, lacunes, cortical infarcts) and cerebral atrophy from both MRI and computed tomography (CT) imaging reports generated during routine healthcare in >395K people in Scotland. In adjusted models, we demonstrate higher risk of dementia (particularly Alzheimer's disease) in people with atrophy, and higher risk of stroke in people with cortical infarcts. However, associations with an age-associated control outcome (colorectal cancer) were neutral, supporting a causal relationship. The study also highlights differential associations between cerebral atrophy and dementia, and between cortical infarcts and stroke risk. Meaning: CCD or atrophy on brain imaging reports in routine clinical practice is associated with a higher risk of stroke or dementia. Evidence is needed to support treatment strategies to reduce this risk. NLP can identify these important, otherwise uncoded, disease phenotypes, allowing research at scale into imaging-based biomarkers of dementia and stroke.
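The study's validated NLP algorithm is not described in the abstract. As a hypothetical sketch of the general idea of extracting CCD phenotypes from free-text imaging reports, the rule-based matcher below flags the four phenotype labels with simple regular expressions; the patterns and label names are illustrative assumptions, not the authors' method.

```python
import re

# Hypothetical rule-based phenotype detection for radiology report text.
# The real study used a validated NLP pipeline; these patterns are toys.
PHENOTYPE_PATTERNS = {
    "WMH": re.compile(r"white matter (hypoattenuation|hyperintensit\w+)", re.I),
    "lacune": re.compile(r"\blacun(e|es|ar infarct\w*)\b", re.I),
    "cortical_infarct": re.compile(r"cortical infarct\w*", re.I),
    "atrophy": re.compile(r"\b(cerebral )?atrophy\b", re.I),
}

def detect_phenotypes(report: str) -> set[str]:
    """Return the set of phenotype labels whose pattern matches the report."""
    return {name for name, pat in PHENOTYPE_PATTERNS.items()
            if pat.search(report)}

report = ("Scattered white matter hypoattenuation. Old lacunar infarct "
          "in the left basal ganglia. Mild cerebral atrophy.")
found = detect_phenotypes(report)
```

A production system would also need negation handling ("no evidence of atrophy") and hedging detection, which is why the study relied on a validated algorithm rather than keyword matching.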

OneTouch Automated Photoacoustic and Ultrasound Imaging of Breast in Standing Pose.

Zhang H, Zheng E, Zheng W, Huang C, Xi Y, Cheng Y, Yu S, Chakraborty S, Bonaccio E, Takabe K, Fan XC, Xu W, Xia J

PubMed paper · Jun 12 2025
We developed an automated photoacoustic and ultrasound breast tomography system that images the patient in the standing pose. The system, named OneTouch-PAT, utilized linear transducer arrays with optical-acoustic combiners for effective dual-modal imaging. During scanning, subjects only need to gently attach their breasts to the imaging window, and co-registered three-dimensional ultrasonic and photoacoustic images of the breast can be obtained within one minute. Our system has a large field of view of 17 cm by 15 cm and achieves an imaging depth of 3 cm with sub-millimeter resolution. A three-dimensional deep-learning network was also developed to further improve the image quality by improving the 3D resolution, enhancing vasculature, eliminating skin signals, and reducing noise. The performance of the system was tested on four healthy subjects and 61 patients with breast cancer. Our results indicate that the ultrasound structural information can be combined with the photoacoustic vascular information for better tissue characterization. Representative cases from different molecular subtypes have indicated different photoacoustic and ultrasound features that could potentially be used for imaging-based cancer classification. Statistical analysis among all patients indicates that the regional photoacoustic intensity and vessel branching points are indicators of breast malignancy. These promising results suggest that our system could significantly enhance breast cancer diagnosis and classification.
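The abstract names vessel branching points as one indicator of malignancy. As a hypothetical, simplified 2D illustration of how such points can be counted on a skeletonized vessel map (the real pipeline works on 3D photoacoustic volumes), the sketch below marks skeleton pixels with three or more eight-connected skeleton neighbours.

```python
# Illustrative sketch: count branching points on a binary vessel skeleton.
# A pixel with >= 3 eight-connected skeleton neighbours is a junction.

def branching_points(skeleton: list[list[int]]) -> int:
    """Count skeleton pixels with three or more eight-connected neighbours."""
    rows, cols = len(skeleton), len(skeleton[0])
    count = 0
    for r in range(rows):
        for c in range(cols):
            if not skeleton[r][c]:
                continue
            neighbours = sum(
                skeleton[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            )
            if neighbours >= 3:
                count += 1
    return count

# A "Y"-shaped vessel: the junction pixel at (2, 2) is the one branch point.
skeleton = [
    [1, 0, 0, 0, 1],
    [0, 1, 0, 1, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
n_branch = branching_points(skeleton)
```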

Radiomics and machine learning for predicting valve vegetation in infective endocarditis: a comparative analysis of mitral and aortic valves using TEE imaging.

Esmaely F, Moradnejad P, Boudagh S, Bitarafan-Rajabi A

PubMed paper · Jun 12 2025
Detecting valve vegetation in infective endocarditis (IE) poses challenges, particularly with mechanical valves, because acoustic shadowing artefacts often obscure critical diagnostic details. This study aimed to classify native and prosthetic mitral and aortic valves with and without vegetation using radiomics and machine learning. A total of 286 TEE scans from suspected IE cases (August 2023-November 2024) were analysed, alongside 113 cases in which suspected IE was rejected, serving as controls. Frames were preprocessed using the Extreme Total Variation Bilateral (ETVB) filter, and radiomics features were extracted for classification using machine learning models, including Random Forest, Decision Tree, SVM, k-NN, and XGBoost. To evaluate the models, AUC, ROC curves, and Decision Curve Analysis (DCA) were used. For native mitral valves, SVM achieved the highest performance with an AUC of 0.88, a sensitivity of 0.91, and a specificity of 0.87. Mechanical mitral valves also showed the best results with SVM (AUC: 0.85, sensitivity: 0.73, specificity: 0.92). Native aortic valves were best classified using SVM (AUC: 0.86, sensitivity: 0.87, specificity: 0.86), while Random Forest excelled for mechanical aortic valves (AUC: 0.81, sensitivity: 0.89, specificity: 0.78). These findings suggest that combining the models with the clinician's report may enhance the diagnostic accuracy of TEE, particularly in the absence of advanced imaging methods like PET/CT.
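AUC, the headline metric above, equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative one (the Mann-Whitney statistic). A minimal sketch, with made-up labels and scores rather than the study's data:

```python
# AUC via pairwise comparison of positive vs. negative scores; a tie
# between a positive and a negative score counts as half a win.

def roc_auc(labels: list[int], scores: list[float]) -> float:
    """AUC as the fraction of (positive, negative) pairs ranked correctly."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one misranked pair out of nine, so AUC = 8/9.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)
```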

High visceral-to-subcutaneous fat area ratio is an unfavorable prognostic indicator in patients with uterine sarcoma.

Kurokawa M, Gonoi W, Hanaoka S, Kurokawa R, Uehara S, Kato M, Suzuki M, Toyohara Y, Takaki Y, Kusakabe M, Kino N, Tsukazaki T, Unno T, Sone K, Abe O

PubMed paper · Jun 12 2025
Uterine sarcoma is a rare disease whose association with body composition parameters is poorly understood. This study explored the impact of body composition parameters on overall survival in patients with uterine sarcoma. This multicenter study included 52 patients with uterine sarcomas treated at three Japanese hospitals between 2007 and 2023. A semi-automatic segmentation program based on deep learning analyzed transaxial CT images at the L3 vertebral level, calculating body composition parameters as follows: area indices (areas divided by height squared) of skeletal muscle, visceral and subcutaneous adipose tissue (SMI, VATI, and SATI, respectively); skeletal muscle density; and the visceral-to-subcutaneous fat area ratio (VSR). The optimal cutoff values for each parameter were calculated using maximally selected rank statistics with several p value approximations. The effects of body composition parameters and clinical data on overall survival (OS) and cancer-specific survival (CSS) were analyzed. Univariate Cox proportional hazards regression analysis revealed that advanced stage (III-IV) and high VSR were unfavorable prognostic factors for both OS and CSS. Multivariate Cox proportional hazards regression analysis revealed that advanced stage (III-IV) (hazard ratios (HRs), 4.67 for OS and 4.36 for CSS, p < 0.01) and high VSR (HRs, 9.36 for OS and 8.22 for CSS, p < 0.001) were poor prognostic factors for both OS and CSS. Added value was observed when the VSR was incorporated into the OS and CSS prediction models. Increased VSR and tumor stage are significant predictors of poor overall survival in patients with uterine sarcoma.
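The parameters above have simple definitions: each area index divides an L3-level tissue area by height squared, and the VSR is the visceral-to-subcutaneous fat area ratio. A small sketch with invented example values:

```python
# Body composition parameters as defined in the abstract: area indices
# (cm^2 / m^2) and the visceral-to-subcutaneous fat area ratio (VSR).
# The numeric values below are illustrative, not patient data.

def area_index(area_cm2: float, height_m: float) -> float:
    """Area index: L3-level tissue area divided by height squared."""
    return area_cm2 / height_m ** 2

def vsr(vat_cm2: float, sat_cm2: float) -> float:
    """Visceral-to-subcutaneous fat area ratio."""
    return vat_cm2 / sat_cm2

height = 1.60                       # patient height in metres
smi = area_index(95.0, height)      # skeletal muscle index
vati = area_index(120.0, height)    # visceral adipose tissue index
sati = area_index(150.0, height)    # subcutaneous adipose tissue index
ratio = vsr(120.0, 150.0)           # VSR = 0.8
```

Note that the VSR needs no height normalisation: because both areas would be divided by the same height squared, the ratio of the indices equals the ratio of the raw areas.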

Radiogenomic correlation of hypoxia-related biomarkers in clear cell renal cell carcinoma.

Shao Y, Cen HS, Dhananjay A, Pawan SJ, Lei X, Gill IS, D'souza A, Duddalwar VA

PubMed paper · Jun 12 2025
This study aimed to evaluate radiomic models' ability to predict hypoxia-related biomarker expression in clear cell renal cell carcinoma (ccRCC). Clinical and molecular data from 190 patients were extracted from The Cancer Genome Atlas-Kidney Renal Clear Cell Carcinoma dataset, and corresponding CT imaging data were manually segmented from The Cancer Imaging Archive. A panel of 2,824 radiomic features was analyzed, and robust, high-interscanner-reproducibility features were selected. Gene expression data for 13 hypoxia-related biomarkers were stratified by tumor grade (1/2 vs. 3/4) and stage (I/II vs. III/IV) and analyzed using Wilcoxon rank sum test. Machine learning modeling was conducted using the High-Performance Random Forest (RF) procedure in SAS Enterprise Miner 15.1, with significance at P < 0.05. Descriptive univariate analysis revealed significantly lower expression of several biomarkers in high-grade and late-stage tumors, with KLF6 showing the most notable decrease. The RF model effectively predicted the expression of KLF6, ETS1, and BCL2, as well as PLOD2 and PPARGC1A underexpression. Stratified performance assessment showed improved predictive ability for RORA, BCL2, and KLF6 in high-grade tumors and for ETS1 across grades, with no significant performance difference across grade or stage. The RF model demonstrated modest but significant associations between texture metrics derived from clinical CT scans, such as GLDM and GLCM, and key hypoxia-related biomarkers including KLF6, BCL2, ETS1, and PLOD2. These findings suggest that radiomic analysis could support ccRCC risk stratification and personalized treatment planning by providing non-invasive insights into tumor biology.
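The conclusions name GLCM texture metrics among the informative features. As a simplified illustration of what a gray-level co-occurrence matrix captures, the sketch below builds a GLCM for a single horizontal offset on a tiny quantized image and computes its contrast feature; real radiomics toolkits quantize intensities, symmetrize the matrix, and pool many offsets, none of which is done here.

```python
from collections import Counter

# Toy GLCM contrast: P(i, j) is the empirical probability of gray level i
# occurring immediately left of gray level j; contrast weights each pair
# by the squared gray-level difference, so uniform regions score 0.

def glcm_contrast(image: list[list[int]]) -> float:
    """GLCM contrast, sum over pairs of P(i, j) * (i - j)^2."""
    pairs = Counter()
    for row in image:
        for a, b in zip(row, row[1:]):   # offset (0, 1): right neighbour
            pairs[(a, b)] += 1
    total = sum(pairs.values())
    return sum(n / total * (i - j) ** 2 for (i, j), n in pairs.items())

image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 2, 2, 2],
    [2, 2, 3, 3],
]
contrast = glcm_contrast(image)   # 7/12: mostly smooth, one (0,2) jump
```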

Tackling Tumor Heterogeneity Issue: Transformer-Based Multiple Instance Enhancement Learning for Predicting EGFR Mutation via CT Images.

Fang Y, Wang M, Song Q, Cao C, Gao Z, Song B, Min X, Li A

PubMed paper · Jun 12 2025
Accurate and non-invasive prediction of epidermal growth factor receptor (EGFR) mutation is crucial for the diagnosis and treatment of non-small cell lung cancer (NSCLC). While computed tomography (CT) imaging shows promise in identifying EGFR mutation, current prediction methods heavily rely on fully supervised learning, which overlooks the substantial heterogeneity of tumors and therefore leads to suboptimal results. To tackle the tumor heterogeneity issue, this study introduces a novel weakly supervised method named TransMIEL, which leverages multiple instance learning techniques for accurate EGFR mutation prediction. Specifically, we first propose an innovative instance enhancement learning (IEL) strategy that strengthens the discriminative power of instance features for complex tumor CT images by exploring self-derived soft pseudo-labels. Next, to improve tumor representation capability, we design a spatial-aware transformer (SAT) that fully captures inter-instance relationships of different pathological subregions to mirror the diagnostic processes of radiologists. Finally, an instance adaptive gating (IAG) module is developed to effectively emphasize the contribution of informative instance features in heterogeneous tumors, facilitating dynamic instance feature aggregation and increasing model generalization performance. Experimental results demonstrate that TransMIEL significantly outperforms existing fully and weakly supervised methods on both public and in-house NSCLC datasets. Additionally, visualization results show that our approach can highlight intra-tumor and peri-tumor areas relevant to EGFR mutation status. Therefore, our method holds significant potential as an effective tool for EGFR prediction and offers a novel perspective for future research on tumor heterogeneity.
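The core of gated multiple instance aggregation, as in the IAG module described above, is that each instance (tumor patch) receives a gate score and the bag embedding is a softmax-weighted mean of instance features. The sketch below is a hypothetical, dependency-free illustration of that aggregation step only; in the paper the gate scores and features come from a trained network.

```python
import math

# Hypothetical attention-gated MIL pooling: softmax over per-instance
# gate scores, then a weighted sum of instance feature vectors.

def softmax(scores: list[float]) -> list[float]:
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate(instances: list[list[float]],
              gate_scores: list[float]) -> list[float]:
    """Bag embedding: softmax-weighted mean of instance feature vectors."""
    weights = softmax(gate_scores)
    dim = len(instances[0])
    return [sum(w * inst[d] for w, inst in zip(weights, instances))
            for d in range(dim)]

# Three 2-D instance features; the high gate score on the second instance
# makes it dominate the aggregated bag representation.
instances = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
bag = aggregate(instances, gate_scores=[0.0, 2.0, 0.0])
```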

Task Augmentation-Based Meta-Learning Segmentation Method for Retinopathy.

Wang J, Mateen M, Xiang D, Zhu W, Shi F, Huang J, Sun K, Dai J, Xu J, Zhang S, Chen X

PubMed paper · Jun 12 2025
Deep learning (DL) requires large amounts of labeled data, which is extremely time-consuming and labor-intensive to obtain for medical image segmentation tasks. Meta-learning focuses on developing learning strategies that enable quick adaptation to new tasks with limited labeled data. However, rich-class medical image segmentation datasets for constructing meta-learning multi-tasks are currently unavailable. In addition, data collected from various healthcare sites and devices may present significant distribution differences, potentially degrading model performance. In this paper, we propose a task augmentation-based meta-learning method for retinal image segmentation (TAMS) to reduce the demand for labor-intensive annotation. A retinal Lesion Simulation Algorithm (LSA) is proposed to automatically generate multi-class retinal disease datasets with pixel-level segmentation labels, such that meta-learning tasks can be augmented without collecting data from various sources. In addition, a novel simulation function library is designed to control the generation process and ensure interpretability. Moreover, a generative simulation network (GSNet) with an improved adversarial training strategy is introduced to maintain high-quality representations of complex retinal diseases. TAMS is evaluated on three different OCT and CFP image datasets, and comprehensive experiments have demonstrated that TAMS achieves segmentation performance superior to state-of-the-art models.
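The key property of lesion simulation as described above is that the segmentation label comes for free: the algorithm that stamps the lesion into the image also knows exactly which pixels it touched. As a hypothetical, heavily simplified illustration (the real LSA uses a library of simulation functions plus the adversarial GSNet), the sketch below generates a disc-shaped lesion mask parametrically.

```python
# Hypothetical lesion simulation: a parametric disc stamped into a binary
# mask, so pixel-level labels are produced alongside the synthetic lesion.

def simulate_lesion(size: int, center: tuple[int, int],
                    radius: int) -> list[list[int]]:
    """Return a size x size binary mask containing a disc-shaped lesion."""
    cy, cx = center
    return [[1 if (r - cy) ** 2 + (c - cx) ** 2 <= radius ** 2 else 0
             for c in range(size)]
            for r in range(size)]

mask = simulate_lesion(size=7, center=(3, 3), radius=2)
lesion_pixels = sum(map(sum, mask))   # labeled lesion area: 13 pixels
```

Varying the center, radius, and shape function across generated samples is what turns one simulator into many augmented meta-learning tasks.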

Modality-AGnostic Image Cascade (MAGIC) for Multi-Modality Cardiac Substructure Segmentation

Nicholas Summerfield, Qisheng He, Alex Kuo, Ahmed I. Ghanem, Simeng Zhu, Chase Ruff, Joshua Pan, Anudeep Kumar, Prashant Nagpal, Jiwei Zhao, Ming Dong, Carri K. Glide-Hurst

arXiv preprint · Jun 12 2025
Cardiac substructures are essential in thoracic radiation therapy planning to minimize risk of radiation-induced heart disease. Deep learning (DL) offers efficient methods to reduce contouring burden but lacks generalizability across different modalities and overlapping structures. This work introduces and validates a Modality-AGnostic Image Cascade (MAGIC) for comprehensive and multi-modal cardiac substructure segmentation. MAGIC is implemented through replicated encoding and decoding branches of an nnU-Net-based, U-shaped backbone, conserving the function of a single model. Twenty cardiac substructures (heart, chambers, great vessels (GVs), valves, coronary arteries (CAs), and conduction nodes) from simulation CT (Sim-CT), low-field MR-Linac, and cardiac CT angiography (CCTA) modalities were manually delineated and used to train (n=76), validate (n=15), and test (n=30) MAGIC. Twelve comparison models (four segmentation subgroups across three modalities) were equivalently trained. All methods were compared for training efficiency and against reference contours using the Dice Similarity Coefficient (DSC) and two-tailed Wilcoxon Signed-Rank test (threshold, p<0.05). Average DSC scores were 0.75 (0.16) for Sim-CT, 0.68 (0.21) for MR-Linac, and 0.80 (0.16) for CCTA. MAGIC outperforms the comparison models in 57% of cases, with limited statistical differences. MAGIC offers an effective and accurate segmentation solution that is lightweight and capable of segmenting multiple modalities and overlapping structures in a single model. MAGIC further enables clinical implementation by simplifying the computational requirements and offering unparalleled flexibility for clinical settings.
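The Dice Similarity Coefficient used throughout these evaluations is defined as DSC = 2|A ∩ B| / (|A| + |B|) over two binary masks. A minimal sketch on toy flattened masks, not real substructure contours:

```python
# Dice Similarity Coefficient between two equal-length binary masks:
# twice the overlap divided by the total foreground in both masks.

def dice(a: list[int], b: list[int]) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); two empty masks score 1.0."""
    intersection = sum(x and y for x, y in zip(a, b))
    total = sum(a) + sum(b)
    return 2.0 * intersection / total if total else 1.0

pred  = [1, 1, 1, 0, 0, 1, 0, 0]
truth = [1, 1, 0, 0, 1, 1, 0, 0]
score = dice(pred, truth)   # 2 * 3 / (4 + 4) = 0.75
```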

Simulation-free workflow for lattice radiation therapy using deep learning predicted synthetic computed tomography: A feasibility study.

Zhu L, Yu NY, Ahmed SK, Ashman JB, Toesca DS, Grams MP, Deufel CL, Duan J, Chen Q, Rong Y

PubMed paper · Jun 12 2025
Lattice radiation therapy (LRT) is a form of spatially fractionated radiation therapy that allows increased total dose delivery, aiming for improved treatment response without an increase in toxicities, and is commonly utilized for palliation of bulky tumors. The LRT treatment planning process is complex, while eligible patients often have an urgent need for expedited treatment start. In this study, we aimed to develop a simulation-free workflow for volumetric modulated arc therapy (VMAT)-based LRT planning via deep learning-predicted synthetic CT (sCT) to expedite treatment initiation. Two deep learning models were initially trained using a 3D U-Net architecture to generate sCT from diagnostic CTs (dCT) of the thoracic and abdominal regions using a training dataset of 50 patients. The models were then tested on an independent dataset of 15 patients using image similarity analysis, with mean absolute error (MAE) and structural similarity index measure (SSIM) as metrics. VMAT-based LRT plans were generated based on sCT and recalculated on the planning CT (pCT) for dosimetric accuracy comparison. Differences in dose volume histogram (DVH) metrics between pCT and sCT plans were assessed using the Wilcoxon signed-rank test. The final sCT prediction model demonstrated high image similarity to pCT, with a MAE and SSIM of 38.93 ± 14.79 Hounsfield Units (HU) and 0.92 ± 0.05 for the thoracic region, and 73.60 ± 22.90 HU and 0.90 ± 0.03 for the abdominal region, respectively. There were no statistically significant differences between sCT and pCT plans in terms of organ-at-risk and target volume DVH parameters, including maximum dose (Dmax), mean dose (Dmean), and dose delivered to 90% (D90%) and 50% (D50%) of the target volume, with the exceptions of minimum dose (Dmin) and D10%. With demonstrated high image similarity and adequate dose agreement between sCT and pCT, our study is a proof-of-concept for using deep learning-predicted sCT for a simulation-free treatment planning workflow for VMAT-based LRT.
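The MAE figures above are mean absolute voxelwise differences in Hounsfield Units between the synthetic and planning CTs. A minimal sketch, with a few made-up HU values standing in for co-registered volumes:

```python
# Mean absolute error between synthetic-CT and planning-CT intensities,
# in Hounsfield Units, averaged over co-registered voxels.

def mae_hu(sct: list[float], pct: list[float]) -> float:
    """Mean absolute voxelwise HU difference between two images."""
    return sum(abs(a - b) for a, b in zip(sct, pct)) / len(sct)

sct_hu = [-1000.0, -50.0, 40.0, 300.0]   # e.g. air, fat, soft tissue, bone
pct_hu = [-990.0, -60.0, 35.0, 340.0]
error = mae_hu(sct_hu, pct_hu)           # (10 + 10 + 5 + 40) / 4 = 16.25
```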
