Page 47 of 141 (1,408 results)

Attention-based deep learning network for predicting World Health Organization meningioma grade and Ki-67 expression based on magnetic resonance imaging.

Cheng X, Li H, Li C, Li J, Liu Z, Fan X, Lu C, Song K, Shen Z, Wang Z, Yang Q, Zhang J, Yin J, Qian C, You Y, Wang X

PubMed | Aug 20, 2025
Preoperative assessment of World Health Organization (WHO) meningioma grading and Ki-67 expression is crucial for treatment strategies. We aimed to develop a fully automated attention-based deep learning network to predict WHO meningioma grading and Ki-67 expression. This retrospective study included 952 meningioma patients, divided into training (n = 542), internal validation (n = 96), and external test sets (n = 314). For each task, clinical, radiomics, and deep learning models were compared. We used nnU-Net ("no new U-Net") models to construct the segmentation network, followed by four classification models using ResNet50 or Swin Transformer architectures with 2D or 2.5D input strategies. All deep learning models incorporated attention mechanisms. Both the segmentation and 2.5D classification models demonstrated robust performance on the external test set. The segmentation network achieved Dice coefficients of 0.98 (0.97-0.99) and 0.87 (0.83-0.91) for brain parenchyma and tumour segmentation, respectively. For predicting meningioma grade, the 2.5D ResNet50 achieved the highest area under the curve (AUC) of 0.90 (0.85-0.93), significantly outperforming the clinical (AUC = 0.77 [0.70-0.83], p < 0.001) and radiomics models (AUC = 0.80 [0.75-0.85], p < 0.001). For Ki-67 expression prediction, the 2.5D Swin Transformer achieved the highest AUC of 0.89 (0.85-0.93), outperforming both the clinical (AUC = 0.76 [0.71-0.81], p < 0.001) and radiomics models (AUC = 0.82 [0.77-0.86], p = 0.002). Our automated deep learning network demonstrated superior performance and could support more precise treatment planning for meningioma patients.
Question: Can artificial intelligence accurately assess meningioma WHO grade and Ki-67 expression from preoperative MRI to guide personalised treatment and follow-up strategies?
Findings: The attention-enhanced nnU-Net segmentation achieved high accuracy, while 2.5D deep learning models with attention mechanisms accurately predicted grades and Ki-67.
Clinical relevance: Our fully automated 2.5D deep learning model, enhanced with attention mechanisms, accurately predicts WHO grades and Ki-67 expression levels in meningiomas, offering a robust, objective, and non-invasive solution to support clinical diagnosis and optimise treatment planning.
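The Dice coefficients reported above are a standard overlap metric for segmentation quality. As a minimal illustrative sketch (toy binary masks, not study data), the computation looks like:

```python
# Minimal sketch of the Dice similarity coefficient (DSC) used to score
# segmentation overlap. The masks below are toy examples, not study data.

def dice_coefficient(pred, truth):
    """DSC = 2*|A ∩ B| / (|A| + |B|) over flattened binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

pred = [1, 1, 0, 1, 0, 0]   # predicted tumour mask (flattened)
truth = [1, 0, 0, 1, 1, 0]  # reference mask
print(round(dice_coefficient(pred, truth), 3))  # 0.667
```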

Unexpected early pulmonary thrombi in war injured patients.

Sasson I, Sorin V, Ziv-Baran T, Marom EM, Czerniawski E, Adam SZ, Aviram G

PubMed | Aug 20, 2025
Pulmonary embolism is commonly associated with deep vein thrombosis and the components of Virchow's triad: hypercoagulability, stasis, and endothelial injury. High-risk patients are traditionally those with prolonged immobility and hypercoagulability. Recent findings of pulmonary thrombosis (PT) in healthy combat soldiers, found on CT performed for initial trauma assessment, challenge this assumption. The aim of this study was to investigate the prevalence and characteristics of PT detected in acute traumatic war injuries, and to evaluate the effectiveness of an artificial intelligence (AI) algorithm in these settings. This retrospective study analyzed immediate post-trauma CT scans of war-injured patients aged 18-45, from two tertiary hospitals between October 7, 2023, and January 7, 2024. Thrombi were retrospectively detected using AI software and confirmed by two senior radiologists. Findings were compared to the original reports. Clinical and injury-related data were analyzed. Of 190 patients (median age 24 years, IQR 21.0-30.0; 183 males), AI identified 10 confirmed PT patients (5.6%), six (60%) of whom were not originally diagnosed. The only statistically significant difference between PT and non-PT patients was increased complexity and severity of injuries (higher Injury Severity Score, median (IQR) 21.0 (20.0-21.0) vs 9.0 (4.0-14.5), respectively; p = 0.01). Despite the presence of thrombi, significant right ventricular dilatation was absent in all patients. This report of early PT in war-injured patients provides a unique opportunity to characterize these findings. PT occurs more frequently than anticipated, without clinical suspicion, highlighting the need for improved radiologists' awareness and the crucial role of AI systems as diagnostic support tools.
Question: What is the prevalence, and what are the radiological characteristics, of arterial clotting within the pulmonary arteries in young acute trauma patients?
Findings: A surprisingly high occurrence of PT, with a high rate of missed diagnoses by radiologists. None of the cases showed right ventricular dysfunction.
Clinical relevance: PT is a distinct clinical entity separate from traditional venous thromboembolism, which raises the need for further investigation of the appropriate treatment paradigm.

Deep Learning Model for Breast Shear Wave Elastography to Improve Breast Cancer Diagnosis (INSPiRED 006): An International, Multicenter Analysis.

Cai L, Pfob A, Barr RG, Duda V, Alwafai Z, Balleyguier C, Clevert DA, Fastner S, Gomez C, Goncalo M, Gruber I, Hahn M, Kapetas P, Nees J, Ohlinger R, Riedel F, Rutten M, Stieber A, Togawa R, Sidey-Gibbons C, Tozaki M, Wojcinski S, Heil J, Golatta M

PubMed | Aug 20, 2025
Shear wave elastography (SWE) has been investigated as a complement to B-mode ultrasound for breast cancer diagnosis. Although multicenter trials suggest benefits for patients with Breast Imaging Reporting and Data System (BI-RADS) 4(a) breast masses, widespread adoption remains limited because of the absence of validated velocity thresholds. This study aims to develop and validate a deep learning (DL) model using SWE images (artificial intelligence [AI]-SWE) for BI-RADS 3 and 4 breast masses and compare its performance with human experts using B-mode ultrasound. We used data from an international, multicenter trial (ClinicalTrials.gov identifier: NCT02638935) evaluating SWE in women with BI-RADS 3 or 4 breast masses across 12 institutions in seven countries. Images from 11 sites were used to develop an EfficientNetB1-based DL model. An external validation was conducted using data from the 12th site. Another validation was performed using the latest SWE software from a separate institutional cohort. Performance metrics included sensitivity, specificity, false-positive reduction, and area under the receiver operating characteristic curve (AUROC). The development set included 924 patients (4,026 images); the external validation sets included 194 patients (562 images) and 176 patients (188 images, latest SWE software). AI-SWE achieved an AUROC of 0.94 (95% CI, 0.91 to 0.96) and 0.93 (95% CI, 0.88 to 0.98) in the two external validation sets. Compared with B-mode ultrasound, AI-SWE significantly reduced false-positive rates by 62.1% (20.4% [30/147] <i>v</i> 53.8% [431/801]; <i>P</i> < .001) and 38.1% (33.3% [14/42] <i>v</i> 53.8% [431/801]; <i>P</i> < .001), with comparable sensitivity (97.9% [46/47] and 97.8% [131/134] <i>v</i> 98.1% [311/317]; <i>P</i> = .912 and <i>P</i> = .810). AI-SWE demonstrated accuracy comparable with human experts in malignancy detection while significantly reducing false-positive imaging findings (ie, unnecessary biopsies).
Future studies should explore its integration into multimodal breast cancer diagnostics.
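The 62.1% false-positive reduction above follows directly from the counts the abstract reports (30/147 for AI-SWE vs 431/801 for B-mode ultrasound in the first external validation set); a minimal sketch of that arithmetic:

```python
# Sketch of the relative false-positive-rate reduction reported above,
# computed from the counts given in the abstract.

def false_positive_rate(false_pos, n_benign):
    """Fraction of benign masses flagged as positive."""
    return false_pos / n_benign

def relative_reduction(new_rate, reference_rate):
    """Relative drop of new_rate versus reference_rate."""
    return (reference_rate - new_rate) / reference_rate

ai_fpr = false_positive_rate(30, 147)    # ~20.4%
us_fpr = false_positive_rate(431, 801)   # ~53.8%
print(round(100 * relative_reduction(ai_fpr, us_fpr), 1))  # 62.1
```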

Characterizing the Impact of Training Data on Generalizability: Application in Deep Learning to Estimate Lung Nodule Malignancy Risk.

Obreja B, Bosma J, Venkadesh KV, Saghir Z, Prokop M, Jacobs C

PubMed | Aug 20, 2025
Purpose To investigate the relationship between training data volume and performance of a deep learning AI algorithm developed to assess the malignancy risk of pulmonary nodules detected on low-dose CT scans in lung cancer screening. Materials and Methods This retrospective study used a dataset of 16,077 annotated nodules (1,249 malignant, 14,828 benign) from the National Lung Screening Trial (NLST) to systematically train an AI algorithm for pulmonary nodule malignancy risk prediction across stratified subsets ranging from 1.25% to the full dataset. External testing was conducted using data from the Danish Lung Cancer Screening Trial (DLCST) to determine the amount of training data at which the performance of the AI was statistically non-inferior to the AI trained on the full NLST cohort. A size-matched, cancer-enriched subset of DLCST, in which each malignant nodule was paired by diameter with the two closest benign nodules, was used to investigate the amount of training data at which the performance of the AI algorithm was statistically non-inferior to the average performance of 11 clinicians. Results The external testing set included 599 participants (mean age 57.65 years [SD 4.84] for females and 59.03 years [SD 4.94] for males) with 883 nodules (65 malignant, 818 benign). The AI achieved a mean AUC of 0.92 [95% CI: 0.88, 0.96] on the DLCST cohort when trained on the full NLST dataset. Training with 80% of NLST data resulted in non-inferior performance (mean AUC 0.92 [95% CI: 0.89, 0.96], <i>P</i> = .005). On the size-matched DLCST subset (59 malignant, 118 benign), the AI reached non-inferior clinician-level performance (mean AUC 0.82 [95% CI: 0.77, 0.86]) with 20% of the training data (<i>P</i> = .02). Conclusion The deep learning AI algorithm demonstrated excellent performance in assessing pulmonary nodule malignancy risk, achieving clinician-level performance with a fraction of the training data and reaching peak performance before utilizing the full dataset. ©RSNA, 2025.
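The AUCs compared above are rank statistics: the probability that a randomly chosen malignant nodule scores higher than a randomly chosen benign one. A minimal Mann-Whitney-style sketch with toy scores (not study outputs):

```python
# Minimal rank-based (Mann-Whitney) AUC sketch, the statistic behind the
# AUC values reported above. Scores are toy values, not study outputs.

def auc(pos_scores, neg_scores):
    """P(random positive score > random negative score); ties count half."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

print(round(auc([0.9, 0.8, 0.4], [0.5, 0.3, 0.2]), 3))  # 0.889
```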

Clinical and Economic Evaluation of a Real-Time Chest X-Ray Computer-Aided Detection System for Misplaced Endotracheal and Nasogastric Tubes and Pneumothorax in Emergency and Critical Care Settings: Protocol for a Cluster Randomized Controlled Trial.

Tsai CL, Chu TC, Wang CH, Chang WT, Tsai MS, Ku SC, Lin YH, Tai HC, Kuo SW, Wang KC, Chao A, Tang SC, Liu WL, Tsai MH, Wang TA, Chuang SL, Lee YC, Kuo LC, Chen CJ, Kao JH, Wang W, Huang CH

PubMed | Aug 20, 2025
Advancements in artificial intelligence (AI) have driven substantial breakthroughs in computer-aided detection (CAD) for chest x-ray (CXR) imaging. The National Taiwan University Hospital research team previously developed an AI-based emergency CXR system (Capstone project), which led to the creation of a CXR module. This CXR module has an established model supported by extensive research and is ready for application in clinical trials without requiring additional model training. This study will use 3 submodules of the system: detection of misplaced endotracheal tubes, detection of misplaced nasogastric tubes, and identification of pneumothorax. This study aims to apply a real-time CXR CAD system in emergency and critical care settings to evaluate its clinical and economic benefits without requiring additional CXR examinations or altering standard care and procedures. The study will evaluate the impact of the CAD system on mortality reduction, postintubation complications, hospital stay duration, workload, and interpretation time, as well as conduct a cost-effectiveness comparison with standard care. This study adopts a pilot trial and cluster randomized controlled trial design, with random assignment conducted at the ward level. In the intervention group, units are granted access to AI diagnostic results, while the control group continues standard care practices. Consent will be obtained from attending physicians, residents, and advanced practice nurses in each participating ward. Once consent is secured, these health care providers in the intervention group will be authorized to use the CAD system. Intervention units will have access to AI-generated interpretations, whereas control units will maintain routine medical procedures without access to the AI diagnostic outputs. The study was funded in September 2024. Data collection is expected to last from January 2026 to December 2027.
This study anticipates that the real-time CXR CAD system will automate the identification and detection of misplaced endotracheal and nasogastric tubes on CXRs, as well as assist clinicians in diagnosing pneumothorax. By reducing the workload of physicians, the system is expected to shorten the time required to detect tube misplacement and pneumothorax, decrease patient mortality and hospital stays, and ultimately lower health care costs. PRR1-10.2196/72928.

Sarcopenia Assessment Using Fully Automated Deep Learning Predicts Cardiac Allograft Survival in Heart Transplant Recipients.

Lang FM, Liu J, Clerkin KJ, Driggin EA, Einstein AJ, Sayer GT, Takeda K, Uriel N, Summers RM, Topkara VK

PubMed | Aug 20, 2025
Sarcopenia is associated with adverse outcomes in patients with end-stage heart failure. Muscle mass can be quantified via manual segmentation of computed tomography images, but this approach is time-consuming and subject to interobserver variability. We sought to determine whether fully automated assessment of radiographic sarcopenia by deep learning would predict heart transplantation outcomes. This retrospective study included 164 adult patients who underwent heart transplantation between January 2013 and December 2022. A deep learning-based tool was utilized to automatically calculate cross-sectional skeletal muscle area at the T11, T12, and L1 levels on chest computed tomography. Radiographic sarcopenia was defined as skeletal muscle index (skeletal muscle area divided by height squared) in the lowest sex-specific quartile. The study population had a mean age of 53±14 years and was predominantly male (75%) with a nonischemic cause (73%). Mean skeletal muscle index was 28.3±7.6 cm<sup>2</sup>/m<sup>2</sup> for females versus 33.1±8.1 cm<sup>2</sup>/m<sup>2</sup> for males (<i>P</i><0.001). Cardiac allograft survival was significantly lower in heart transplant recipients with versus without radiographic sarcopenia at T11 (90% versus 98% at 1 year, 83% versus 97% at 3 years, log-rank <i>P</i>=0.02). After multivariable adjustment, radiographic sarcopenia at T11 was associated with an increased risk of cardiac allograft loss or death (hazard ratio, 3.86 [95% CI, 1.35-11.0]; <i>P</i>=0.01). Patients with radiographic sarcopenia also had a significantly increased hospital length of stay (28 [interquartile range, 19-33] versus 20 [interquartile range, 16-31] days; <i>P</i>=0.046). Fully automated quantification of radiographic sarcopenia using pretransplant chest computed tomography successfully predicts cardiac allograft survival. 
By avoiding interobserver variability and accelerating computation, this approach has the potential to improve candidate selection and outcomes in heart transplantation.
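The skeletal muscle index defined above is simply cross-sectional muscle area normalized by height squared; a tiny sketch with hypothetical values (not patient data):

```python
# Sketch of the skeletal muscle index (SMI) definition used above:
# cross-sectional skeletal muscle area divided by height squared.
# The example values are hypothetical, not patient data.

def skeletal_muscle_index(muscle_area_cm2, height_m):
    """SMI in cm^2/m^2."""
    return muscle_area_cm2 / height_m ** 2

# e.g. 90 cm^2 of muscle at T11 for a 1.75 m patient
print(round(skeletal_muscle_index(90.0, 1.75), 1))  # 29.4
```

In the study, radiographic sarcopenia was then defined as an SMI in the lowest sex-specific quartile of the cohort.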

Multi-View Echocardiographic Embedding for Accessible AI Development

Tohyama, T., Han, A., Yoon, D., Paik, K., Gow, B., Izath, N., Kpodonu, J., Celi, L. A.

medRxiv preprint | Aug 19, 2025
Background and Aims: Echocardiography serves as a cornerstone of cardiovascular diagnostics through multiple standardized imaging views. While recent AI foundation models demonstrate superior capabilities across cardiac imaging tasks, their massive computational requirements and reliance on large-scale datasets create accessibility barriers, limiting AI development to well-resourced institutions. Vector embedding approaches offer promising solutions by leveraging compact representations from original medical images for downstream applications. Furthermore, demographic fairness remains critical, as AI models may incorporate biases that confound clinically relevant features. We developed a multi-view encoder framework to address computational accessibility while investigating demographic fairness challenges. Methods: We utilized the MIMIC-IV-ECHO dataset (7,169 echocardiographic studies) to develop a transformer-based multi-view encoder that aggregates view-level representations into study-level embeddings. The framework incorporated adversarial learning to suppress demographic information while maintaining clinical performance. We evaluated performance across 21 binary classification tasks encompassing echocardiographic measurements and clinical diagnoses, comparing against foundation model baselines with varying adversarial weights. Results: The multi-view encoder achieved a mean improvement of 9.0 AUC points (12.0% relative improvement) across clinical tasks compared to foundation model embeddings. Performance remained robust with limited echocardiographic views compared to the conventional approach. However, adversarial learning showed limited effectiveness in reducing demographic shortcuts, with stronger weighting substantially compromising diagnostic performance. Conclusions: Our framework democratizes advanced cardiac AI capabilities, enabling substantial diagnostic improvements without massive computational infrastructure.
While algorithmic approaches to demographic fairness showed limitations, the multi-view encoder provides a practical pathway for broader AI adoption in cardiovascular medicine with enhanced efficiency in real-world clinical settings.
Key Question: Can multi-view encoder frameworks achieve superior diagnostic performance compared to foundation model embeddings while reducing computational requirements and maintaining robust performance with fewer echocardiographic views for cardiac AI applications?
Key Finding: The multi-view encoder achieved a 12.0% relative improvement (9.0 AUC points) across 21 cardiac tasks compared to foundation model baselines, with efficient 512-dimensional vector embeddings and robust performance using fewer echocardiographic views.
Take-home Message: Vector embedding approaches with attention-based multi-view integration significantly improve cardiac diagnostic performance while reducing computational requirements, offering a pathway toward more efficient AI implementation in clinical settings.
Translational Perspective: Our proposed multi-view encoder framework overcomes critical barriers to the widespread adoption of artificial intelligence in echocardiography. By dramatically reducing computational requirements, the multi-view encoder approach allows smaller healthcare institutions to develop sophisticated AI models locally. The framework maintains robust performance with fewer echocardiographic examinations, which addresses real-world clinical constraints where comprehensive imaging is not feasible due to patient factors or time limitations. This technology provides a practical way to democratize advanced cardiac AI capabilities, which could improve access to cardiovascular care across diverse healthcare settings while reducing dependence on proprietary datasets and massive computational resources.
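The core aggregation idea above, combining view-level embeddings into one study-level embedding via attention, can be sketched minimally as a softmax-weighted sum. The scoring vector and embeddings below are toy values; the paper's actual architecture is a transformer encoder, which this simplified sketch does not reproduce.

```python
# Hedged sketch of attention-based pooling of per-view embedding vectors
# into a single study-level embedding. Toy values, not the paper's model.
import math

def attention_pool(view_embeddings, score_weights):
    """Softmax(score)-weighted sum of per-view embedding vectors."""
    scores = [sum(w * x for w, x in zip(score_weights, v)) for v in view_embeddings]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    attn = [e / sum(exps) for e in exps]  # attention weights, sum to 1
    dim = len(view_embeddings[0])
    return [sum(a * v[d] for a, v in zip(attn, view_embeddings)) for d in range(dim)]

views = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # three toy view embeddings
study = attention_pool(views, [0.5, 0.5])
print(len(study))  # 2: one study-level vector of the same dimension
```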

TME-guided deep learning predicts chemotherapy and immunotherapy response in gastric cancer with attention-enhanced residual Swin Transformer.

Sang S, Sun Z, Zheng W, Wang W, Islam MT, Chen Y, Yuan Q, Cheng C, Xi S, Han Z, Zhang T, Wu L, Li W, Xie J, Feng W, Chen Y, Xiong W, Yu J, Li G, Li Z, Jiang Y

PubMed | Aug 19, 2025
Adjuvant chemotherapy and immune checkpoint blockade can elicit durable anti-tumor responses, but the lack of effective biomarkers limits their therapeutic benefit. Utilizing multiple cohorts totalling 3,095 patients with gastric cancer, we propose an attention-enhanced residual Swin Transformer network to predict chemotherapy response (the main task), with two prediction subtasks (ImmunoScore and periostin [POSTN]) serving as intermediate tasks to improve the model's performance. Furthermore, we assess whether the model can identify which patients would benefit from immunotherapy. The deep learning model achieves high accuracy in predicting chemotherapy response and the tumor microenvironment (ImmunoScore and POSTN). We further find that the model can identify which patients may benefit from checkpoint blockade immunotherapy. This approach offers precise chemotherapy and immunotherapy response predictions, opening avenues for personalized treatment options. Prospective studies are warranted to validate its clinical utility.
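Training a main task alongside intermediate subtasks, as described above, is typically done by summing weighted per-task losses. The abstract does not give the actual loss weights, so the 0.3 auxiliary weight below is a hypothetical placeholder in this generic sketch:

```python
# Generic weighted multi-task loss sketch for a main task plus auxiliary
# subtasks. The 0.3 auxiliary weight is hypothetical, not from the paper.

def multitask_loss(main_loss, aux_losses, aux_weight=0.3):
    """Total loss = main-task loss + weighted sum of auxiliary-task losses."""
    return main_loss + aux_weight * sum(aux_losses)

# main: chemotherapy response; aux: ImmunoScore and POSTN prediction
total = multitask_loss(0.50, [0.40, 0.20])
print(round(total, 2))  # 0.68
```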

CT-based auto-segmentation of multiple target volumes for all-in-one radiotherapy in rectal cancer patients.

Li X, Wang L, Yang M, Li X, Zhao T, Wang M, Lu S, Ji Y, Zhang W, Jia L, Peng R, Wang J, Wang H

PubMed | Aug 19, 2025
This study aimed to evaluate the clinical feasibility and performance of CT-based auto-segmentation models integrated into an All-in-One radiotherapy workflow for rectal cancer. This study included 312 rectal cancer patients, with 272 used to train three nnU-Net models for CTV45, CTV50, and GTV segmentation, and 40 for evaluation across one internal (<i>n</i> = 10), one clinical AIO (<i>n</i> = 10), and two external cohorts (<i>n</i> = 10 each). Segmentation accuracy (DSC, HD, HD95, ASSD, ASD) and time efficiency were assessed. In the internal testing set, mean DSC of CTV45, CTV50, and GTV were 0.90, 0.86, and 0.71; HD were 17.08, 25.48, and 79.59 mm; HD95 were 4.89, 7.33, and 56.49 mm; ASSD were 1.23, 1.90, and 6.69 mm; and ASD were 1.24, 1.58, and 11.61 mm. Auto-segmentation reduced manual delineation time by 63.3–88.3% (<i>p</i> < 0.0001). In clinical practice, average DSC of CTV45, CTV50, and GTV were 0.93, 0.88, and 0.78; HD were 13.56, 23.84, and 35.38 mm; HD95 were 3.33, 6.46, and 21.34 mm; ASSD were 0.78, 1.49, and 3.30 mm; and ASD were 0.74, 1.18, and 2.13 mm. The results from the multi-center testing also showed applicability of these models, since the average DSC of CTV45 and GTV were 0.84 and 0.80, respectively. The models demonstrated high accuracy and clinical utility, effectively streamlining target volume delineation and reducing manual workload in routine practice. The study protocol was approved by the Institutional Review Board of Peking University Third Hospital (Approval No. (2024) Medical Ethics Review No. 182-01).
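The HD95 values above are 95th-percentile Hausdorff distances, which cap the influence of the few worst boundary points. A minimal 1-D sketch (real use is on 3-D surface point sets; the toy coordinates here are for illustration only):

```python
# Minimal 1-D sketch of the 95th-percentile Hausdorff distance (HD95).
# Real use is on 3-D contour/surface points; these are toy coordinates.
import math

def nearest_distances(a, b):
    """For each point in a, distance to its nearest neighbour in b."""
    return [min(abs(x - y) for y in b) for x in a]

def hd95(a, b):
    """95th percentile of the symmetric nearest-neighbour distances."""
    d = sorted(nearest_distances(a, b) + nearest_distances(b, a))
    idx = max(0, math.ceil(0.95 * len(d)) - 1)
    return d[idx]

print(hd95([0.0, 1.0, 2.0], [0.0, 1.0, 2.0]))  # 0.0 for identical contours
```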

A Cardiac-specific CT Foundation Model for Heart Transplantation

Xu, H., Woicik, A., Asadian, S., Shen, J., Zhang, Z., Nabipoor, A., Musi, J. P., Keenan, J., Khorsandi, M., Al-Alao, B., Dimarakis, I., Chalian, H., Lin, Y., Fishbein, D., Pal, J., Wang, S., Lin, S.

medRxiv preprint | Aug 19, 2025
Heart failure is a major cause of morbidity and mortality, with the severest forms requiring heart transplantation. Heart size matching between the donor and recipient is a critical step in ensuring a successful transplantation. Currently, a set of equations based on population measures of height, weight, sex, and age, viz. predicted heart mass (PHM), is used, but it can be improved upon with personalized information from recipient and donor chest CT images. Here, we developed GigaHeart, the first heart-specific foundation model, pretrained on 180,897 chest CT volumes from 56,607 patients. The key idea of GigaHeart is to direct the foundation model's attention towards the heart by contrasting the heart region and the entire chest, thereby encouraging the model to capture fine-grained cardiac features. GigaHeart achieves the best performance on 8 cardiac-specific classification tasks and, further, exhibits superior performance on cross-modal tasks by jointly modeling CT images and reports. We similarly developed a thorax-specific foundation model and observed promising performance on 9 thorax-specific tasks, indicating the potential to extend GigaHeart to other organ-specific foundation models. More importantly, GigaHeart addresses the heart sizing problem. It avoids oversizing by correctly segmenting the hearts of donors and recipients: in regressions against actual heart masses, our AI-segmented total cardiac volumes (TCVs) show a 33.3% R<sup>2</sup> improvement over PHM. Meanwhile, GigaHeart also addresses the undersizing problem by adding a regression layer to the model; specifically, GigaHeart reduces the mean squared error by 57% against PHM. In total, we show that GigaHeart increases the acceptable range of donor heart sizes and matches more accurately than the widely used PHM equations. In all, GigaHeart is a state-of-the-art, cardiac-specific foundation model whose key innovation is directing the model's attention to the heart.
GigaHeart can be fine-tuned to accomplish a number of tasks accurately, of which AI-assisted heart sizing is a novel example.
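The R<sup>2</sup> comparison above measures how much variance in actual heart mass each regressor explains; a minimal sketch of the coefficient of determination with toy numbers (not study data):

```python
# Sketch of the coefficient of determination (R^2) used to compare
# regressors of actual heart mass. Toy values, not study data.

def r_squared(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# hypothetical heart masses (g) and a regressor's predictions
print(round(r_squared([300.0, 320.0, 340.0], [305.0, 318.0, 338.0]), 3))  # 0.959
```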
