
Deep Learning in Myocarditis: A Novel Approach to Severity Assessment

Nishimori, M., Otani, T., Asaumi, Y., Ohta-Ogo, K., Ikeda, Y., Amemiya, K., Noguchi, T., Izumi, C., Shinohara, M., Hatakeyama, K., Nishimura, K.

medRxiv preprint · Aug 2, 2025
Background: Myocarditis is a life-threatening disease with significant hemodynamic risks during the acute phase. Although histopathological examination of myocardial biopsy specimens remains the gold standard for diagnosis, there is no established method for objectively quantifying cardiomyocyte damage. We aimed to develop an AI model to evaluate clinical myocarditis severity using comprehensive pathology data.

Methods: We retrospectively analyzed 314 patients (1076 samples) who underwent myocardial biopsy from 2002 to 2021 at the National Cerebrovascular Center. Among these patients, 158 were diagnosed with myocarditis based on the Dallas criteria. A Multiple Instance Learning (MIL) model served as a pre-trained classifier to detect myocarditis across whole-slide images. We then constructed two clinical severity-prediction models: (1) a logistic regression model (Model 1) using the density of inflammatory cells per unit area, and (2) a Transformer-based model (Model 2), which processed the top-ranked patches identified by the MIL model to predict severe clinical outcomes.

Results: Model 1 achieved an AUROC of 0.809, indicating a robust association between inflammatory cell density and severe myocarditis. Model 2, the Transformer-based approach, yielded an AUROC of 0.993 and demonstrated higher accuracy and precision for severity prediction. Attention score visualizations showed that Model 2 captured both inflammatory cell infiltration and additional morphological features. These findings suggest that combining MIL with Transformer architectures enables more comprehensive identification of key histological markers associated with clinically severe disease.

Conclusions: Our results highlight that a Transformer-based AI model analyzing whole-slide pathology images can accurately assess clinical myocarditis severity. Moreover, simply quantifying the extent of inflammatory cell infiltration also correlates strongly with clinical outcomes. These methods offer a promising avenue for improving diagnostic precision, guiding treatment decisions, and ultimately enhancing patient management. Future prospective studies are warranted to validate these models in broader clinical settings and facilitate their integration into routine pathological workflows.

What is new?
- This is the first study to apply an AI model to the diagnosis and severity assessment of myocarditis.
- New evidence shows that inflammatory cell infiltration is related to the severity of myocarditis.
- Using information from the entire tissue, not just inflammatory cells, allows for a more accurate assessment of myocarditis severity.

What are the clinical implications?
- The AI model allows an unprecedented histological evaluation of myocarditis severity, which can enhance early diagnosis and intervention strategies.
- Rapid and precise assessments of myocarditis severity by the AI model can support clinicians in making timely and appropriate treatment decisions, potentially improving patient outcomes.
- Incorporating this AI model into clinical practice may streamline diagnostic workflows and optimize the allocation of medical resources, enhancing overall patient care.
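The two-stage design described in this abstract — an MIL classifier that scores whole-slide patches, whose top-ranked patches then feed a downstream severity model — can be sketched minimally. The softmax attention scoring below is a generic stand-in for illustration, not the authors' actual model:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of raw patch scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_patches(patch_scores, k):
    """Indices of the k patches with the highest MIL attention weights.

    In an MIL pipeline these indices would select the patch embeddings
    handed on to the downstream severity model (the Transformer stage).
    """
    weights = softmax(patch_scores)
    ranked = sorted(range(len(weights)), key=lambda i: weights[i], reverse=True)
    return ranked[:k]
```

Since softmax is monotone, ranking by normalized attention is equivalent to ranking by the raw scores; for example, `top_k_patches([0.1, 2.0, -1.0, 3.5], 2)` selects patches 3 and 1.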

Brain Age Prediction: Deep Models Need a Hand to Generalize.

Rajabli R, Soltaninejad M, Fonov VS, Bzdok D, Collins DL

PubMed · Aug 1, 2025
Brain age predicted from T1-weighted MRI is a promising marker for understanding brain aging and its associated conditions. While deep learning models have shown success in reducing the mean absolute error (MAE) of predicted brain age, concerns about robust and accurate generalization to new data limit their clinical applicability. The large number of trainable parameters, combined with limited medical imaging training data, contributes to this challenge, often resulting in a generalization gap: a significant discrepancy between model performance on training data and on unseen data. In this study, we assess a deep model, SFCN-reg, based on the VGG-16 architecture, and address the generalization gap through comprehensive preprocessing, extensive data augmentation, and model regularization. Using training data from the UK Biobank, we demonstrate substantial improvements in model performance. Specifically, our approach reduces the generalization MAE by 47% (from 5.25 to 2.79 years) on the Alzheimer's Disease Neuroimaging Initiative dataset and by 12% (from 4.35 to 3.75 years) on the Australian Imaging, Biomarker and Lifestyle dataset. Furthermore, we achieve up to a 13% reduction in scan-rescan error (from 0.80 to 0.70 years) while enhancing the model's robustness to registration errors. Feature importance maps highlight the anatomical regions used to predict age. These results underscore the critical role of high-quality preprocessing and robust training techniques in improving accuracy and narrowing the generalization gap, both necessary steps toward the clinical use of brain age prediction models. Our study contributes to neuroimaging research by offering a potential pathway to improve the clinical applicability of deep learning models.
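The two quantities this abstract reports — MAE in years and its relative reduction on an external cohort — are simple to compute. A minimal sketch (function names are illustrative, not from the paper's code):

```python
def mean_absolute_error(ages_true, ages_pred):
    """Mean absolute error in years between chronological and predicted brain age."""
    return sum(abs(t - p) for t, p in zip(ages_true, ages_pred)) / len(ages_true)

def relative_reduction(old_mae, new_mae):
    """Fractional MAE reduction; 5.25 -> 2.79 years is roughly a 47% reduction."""
    return (old_mae - new_mae) / old_mae
```

For example, predictions of 62 and 67 years for subjects aged 60 and 70 give an MAE of 2.5 years.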

Moving Beyond CT Body Composition Analysis: Using Style Transfer for Bringing CT-Based Fully-Automated Body Composition Analysis to T2-Weighted MRI Sequences.

Haubold J, Pollok OB, Holtkamp M, Salhöfer L, Schmidt CS, Bojahr C, Straus J, Schaarschmidt BM, Borys K, Kohnke J, Wen Y, Opitz M, Umutlu L, Forsting M, Friedrich CM, Nensa F, Hosch R

PubMed · Aug 1, 2025
Deep learning for body composition analysis (BCA) is gaining traction in clinical research, offering rapid, automated measurement of body features such as muscle or fat volume. However, most current methods prioritize computed tomography (CT) over magnetic resonance imaging (MRI). This study presents a deep learning approach for automatic BCA on MR T2-weighted sequences. Initial BCA segmentations (10 body regions and 4 body parts) were generated by mapping CT segmentations from the body and organ analysis (BOA) model to synthetic MR images created with an in-house trained CycleGAN. In total, 30 synthetic data pairs were used to train an initial nnU-Net V2 in 3D, and this preliminary model was then applied to 120 real T2-weighted MRI sequences from 120 patients (46% female; median age 56, interquartile range 17.75), generating early segmentation proposals. These proposals were refined by human annotators, and nnU-Net V2 2D and 3D models were trained using 5-fold cross-validation on this optimized dataset of real MR images. Performance was evaluated using the Sørensen-Dice coefficient, Surface Dice, and Hausdorff distance, including 95% confidence intervals for the cross-validation and ensemble models. The 3D ensemble segmentation model achieved the highest Dice scores for the body region classes: bone 0.926 (95% confidence interval [CI], 0.914-0.937), muscle 0.968 (95% CI, 0.961-0.975), subcutaneous fat 0.98 (95% CI, 0.971-0.986), nervous system 0.973 (95% CI, 0.965-0.98), thoracic cavity 0.978 (95% CI, 0.969-0.984), abdominal cavity 0.989 (95% CI, 0.986-0.991), mediastinum 0.92 (95% CI, 0.901-0.936), pericardium 0.945 (95% CI, 0.924-0.96), brain 0.966 (95% CI, 0.927-0.989), and glands 0.905 (95% CI, 0.886-0.921). The body-part 2D ensemble model reached the highest Dice scores for all labels: arms 0.952 (95% CI, 0.937-0.965), head + neck 0.965 (95% CI, 0.953-0.976), legs 0.978 (95% CI, 0.968-0.988), and torso 0.99 (95% CI, 0.988-0.991). The overall average Dice scores of the body-part (2D = 0.971, 3D = 0.969, P = ns) and body-region (2D = 0.935, 3D = 0.955, P < 0.001) ensemble models indicate stable performance across all classes. The presented approach facilitates efficient, automated extraction of BCA parameters from T2-weighted MRI sequences, providing precise and detailed body composition information across various regions and body parts.
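The Sørensen-Dice metric that anchors these evaluations reduces to a few lines for binary segmentation masks; a minimal sketch on flattened 0/1 masks:

```python
def dice_coefficient(mask_a, mask_b):
    """Sørensen-Dice overlap between two binary masks given as flat 0/1 lists.

    Dice = 2|A ∩ B| / (|A| + |B|); two empty masks are scored as perfect overlap.
    """
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0
```

For instance, masks `[1, 1, 0, 0]` and `[1, 0, 1, 0]` share one foreground voxel out of two each, giving a Dice of 0.5.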

Deep Learning-Based Signal Amplification of T1-Weighted Single-Dose Images Improves Metastasis Detection in Brain MRI.

Haase R, Pinetz T, Kobler E, Bendella Z, Zülow S, Schievelkamp AH, Schmeel FC, Panahabadi S, Stylianou AM, Paech D, Foltyn-Dumitru M, Wagner V, Schlamp K, Heussel G, Holtkamp M, Heussel CP, Vahlensieck M, Luetkens JA, Schlemmer HP, Haubold J, Radbruch A, Effland A, Deuschl C, Deike K

PubMed · Aug 1, 2025
Double-dose contrast-enhanced brain imaging improves tumor delineation and the detection of occult metastases but is limited by concerns about the effects of gadolinium-based contrast agents on patients and the environment. The purpose of this study was to test the benefit of deep learning-based contrast signal amplification applied to true single-dose T1-weighted (T-SD) images, creating artificial double-dose (A-DD) images, for metastasis detection in brain magnetic resonance imaging. In this prospective, multicenter study, a deep learning-based method originally trained on noncontrast, low-dose, and T-SD brain images was applied to T-SD images of 30 participants (mean age ± SD, 58.5 ± 11.8 years; 23 women) acquired externally between November 2022 and June 2023. Four readers with different levels of experience independently reviewed T-SD and A-DD images for metastases, with 4 weeks between readings. A reference reader reviewed additionally acquired true double-dose images to determine any metastases present. Performance was compared using mid-p McNemar tests for sensitivity and Wilcoxon signed rank tests for false-positive findings. All readers found more metastases using A-DD images. The 2 experienced neuroradiologists achieved the same sensitivity on T-SD images (62 of 91 metastases, 68.1%). While the increase in sensitivity with A-DD images was only descriptive for one of them (65 of 91 metastases, +3.3%, P = 0.424), the second neuroradiologist benefited significantly, with a sensitivity increase of 12.1% (73 of 91 metastases, P = 0.008). The 2 less experienced readers (1 resident and 1 fellow) both found significantly more metastases on A-DD images (resident, T-SD: 61.5%, A-DD: 68.1%, P = 0.039; fellow, T-SD: 58.2%, A-DD: 70.3%, P = 0.008). They were therefore able to use A-DD images to raise their sensitivity to the neuroradiologists' initial level on regular T-SD images. False-positive findings did not differ significantly between sequences, although readers showed descriptively more false positives on A-DD images. The gain in sensitivity particularly applied to metastases ≤5 mm (5.7%-17.3% increase in sensitivity). A-DD images can improve the detectability of brain metastases without a significant loss of precision and could therefore represent a valuable addition to regular single-dose brain imaging.
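The mid-p McNemar test used here to compare reader sensitivities depends only on the discordant-pair counts, i.e. metastases caught on one image set but missed on the other. A minimal sketch, assuming the standard two-sided exact formulation with the mid-p correction (halving the weight of the observed outcome):

```python
from math import comb

def midp_mcnemar(b, c):
    """Two-sided mid-p McNemar test from discordant-pair counts b and c.

    b and c count decisions that flipped between the two image sets
    (e.g. seen on A-DD but missed on T-SD, and vice versa). Under the
    null, flips are Binomial(b + c, 0.5); the mid-p variant subtracts
    half the probability of the observed count from the exact p-value.
    """
    n = b + c
    x = min(b, c)
    p_exact = 2.0 * sum(comb(n, k) for k in range(x + 1)) / 2 ** n
    p_mid = p_exact - comb(n, x) / 2 ** n
    return min(1.0, p_mid)
```

For example, 1 flip one way versus 5 the other gives a mid-p of 0.125 (exact two-sided p 0.21875 minus the observed-outcome mass 6/64).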

First comparison between artificial intelligence-guided coronary computed tomography angiography versus single-photon emission computed tomography testing for ischemia in clinical practice.

Cho GW, Sayed S, D'Costa Z, Karlsberg DW, Karlsberg RP

PubMed · Aug 1, 2025
Noninvasive cardiac testing with coronary computed tomography angiography (CCTA) and single-photon emission computed tomography (SPECT) is becoming an alternative to invasive angiography for the evaluation of obstructive coronary artery disease. We aimed to evaluate whether a novel artificial intelligence (AI)-assisted CCTA program is comparable to SPECT imaging for ischemia testing. CCTA images were analyzed using an AI convolutional neural network machine-learning model, atherosclerosis imaging-quantitative computed tomography (AI-QCT ISCHEMIA). A total of 183 patients (75 females and 108 males; average age 60.8 ± 12.3 years) were selected. All patients underwent AI-QCT ISCHEMIA-augmented CCTA, with 60 undergoing concurrent SPECT and 16 having invasive coronary angiograms. Eight studies were excluded from analysis due to incomplete data or coronary anomalies, leaving 175 patients (95%) with CCTA scans deemed acceptable for AI-QCT ISCHEMIA interpretation. Compared with invasive angiography, AI-QCT ISCHEMIA-driven CCTA showed a sensitivity of 75% and specificity of 70% for predicting coronary ischemia, versus 70% and 53%, respectively, for SPECT. The negative predictive value for female patients was higher with AI-QCT ISCHEMIA than with SPECT (91% vs. 68%, P = 0.042). Areas under the receiver operating characteristic curves were similar between the two modalities (0.81 for AI-CCTA, 0.75 for SPECT, P = 0.526). The correlation coefficient between the two modalities was r = 0.71 (P < 0.04). AI-powered CCTA is a viable alternative to SPECT for detecting myocardial ischemia in patients with low- to intermediate-risk coronary artery disease, with significantly correlated results between the modalities. For patients who underwent confirmatory invasive angiography, the results of AI-CCTA and SPECT imaging were comparable. Future prospective studies involving larger and more diverse patient populations are warranted to further investigate the benefits of AI-driven CCTA.
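Sensitivity, specificity, and negative predictive value — the metrics this comparison turns on — all derive from a 2x2 confusion table. A minimal sketch (the counts in the usage example are illustrative, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and NPV from a 2x2 confusion table.

    tp/fn: ischemia cases the test caught / missed;
    tn/fp: non-ischemic cases the test cleared / flagged.
    """
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "npv": tn / (tn + fn),
    }
```

For example, `diagnostic_metrics(tp=3, fp=3, tn=7, fn=1)` yields 75% sensitivity, 70% specificity, and an NPV of 87.5%.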

Deep Learning Reconstruction Combined With Conventional Acceleration Improves Image Quality of 3 T Brain MRI and Does Not Impact Quantitative Diffusion Metrics.

Wilpert C, Russe MF, Weiss J, Voss C, Rau S, Strecker R, Reisert M, Bedin R, Urbach H, Zaitsev M, Bamberg F, Rau A

PubMed · Aug 1, 2025
Deep learning reconstruction of magnetic resonance imaging (MRI) makes it possible either to improve the image quality of accelerated sequences or to generate high-resolution data. We evaluated how conventional acceleration interacts with Deep Resolve Boost (DRB)-based reconstruction of single-shot echo-planar imaging (ssEPI) diffusion-weighted imaging (DWI) with respect to image quality in 3 T brain MRI, and compared the results with a state-of-the-art DWI sequence. In this prospective study, 24 patients received a standard-of-care ssEPI DWI and 5 additional adapted ssEPI DWI sequences, 3 of them with DRB reconstruction. Qualitative analysis comprised ratings of image quality, noise, sharpness, and artifacts. Quantitative analysis compared apparent diffusion coefficient (ADC) values region-wise between the different DWI sequences. Intraclass correlations, paired-sample t tests, Wilcoxon signed rank tests, and weighted Cohen κ were used. Compared with the reference standard, acquisition time in accelerated DWI was reduced by up to 50% (from 75 to 39 seconds; P < 0.001). All tested DRB-reconstructed sequences showed significantly improved image quality and sharpness and reduced noise (P < 0.001). The highest image quality was observed for the combination of conventional acceleration and DL reconstruction. In singular slices, more artifacts were observed for DRB-reconstructed sequences (P < 0.001). While ADC values were in general highly consistent, differences in ADC values increased with increasing acceleration and application of DRB. Falsely pathological ADC values were rarely observed near the frontal poles and optic chiasm, attributable to susceptibility artifacts from the adjacent sinuses. In this comparative study, we found that the combination of conventional acceleration and DRB reconstruction improves image quality and enables faster acquisition of ssEPI DWI. Nevertheless, a tradeoff must be considered between stronger acceleration, with a risk of more pronounced artifacts, and higher resolution, with longer acquisition time, especially for application in cerebral MRI.
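The region-wise ADC values compared above come from a mono-exponential fit to the diffusion signal; in the common two-b-value case the fit is closed-form. A minimal sketch (the default b-values are illustrative, not those of the study's protocol):

```python
import math

def adc(s_low, s_high, b_low=0.0, b_high=1000.0):
    """Apparent diffusion coefficient (mm^2/s) from a two-point fit.

    Assumes the mono-exponential model S(b) = S0 * exp(-b * ADC), so
    ADC = ln(S_low / S_high) / (b_high - b_low) for signals at two b-values.
    """
    return math.log(s_low / s_high) / (b_high - b_low)
```

For example, a signal that decays from 1000 at b = 0 to 1000·e⁻¹ at b = 1000 s/mm² corresponds to an ADC of 0.001 mm²/s, a typical order of magnitude for brain tissue.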

Multimodal multiphasic preoperative image-based deep-learning predicts HCC outcomes after curative surgery.

Hui RW, Chiu KW, Lee IC, Wang C, Cheng HM, Lu J, Mao X, Yu S, Lam LK, Mak LY, Cheung TT, Chia NH, Cheung CC, Kan WK, Wong TC, Chan AC, Huang YH, Yuen MF, Yu PL, Seto WK

PubMed · Aug 1, 2025
HCC recurrence frequently occurs after curative surgery. Histological microvascular invasion (MVI) predicts recurrence but cannot provide preoperative prognostication, whereas clinical prediction scores have variable performance. Recurr-NET, a multimodal multiphasic residual-network random survival forest deep-learning model incorporating preoperative CT and clinical parameters, was developed to predict HCC recurrence. Preoperative triphasic CT scans were retrieved from patients with resected, histology-confirmed HCC from 4 centers in Hong Kong (internal cohort). The internal cohort was randomly divided in an 8:2 ratio into training and internal validation sets. External testing was performed in an independent cohort from Taiwan. Among 1231 patients (age 62.4 years, 83.1% male, 86.8% viral hepatitis, median follow-up 65.1 months), cumulative HCC recurrence rates at years 2 and 5 were 41.8% and 56.4%, respectively. Recurr-NET achieved excellent accuracy in predicting recurrence from years 1 to 5 (internal cohort AUROC 0.770-0.857; external AUROC 0.758-0.798), significantly outperforming MVI (internal AUROC 0.518-0.590; external AUROC 0.557-0.615) and multiple clinical risk scores (ERASL-PRE, ERASL-POST, DFT, and Shim scores; internal AUROC 0.523-0.587, external AUROC 0.524-0.620) (all p < 0.001). Recurr-NET was superior to MVI in stratifying recurrence risk at year 2 (internal: 72.5% vs. 50.0% for MVI; external: 65.3% vs. 46.6%) and at year 5 (internal: 86.4% vs. 62.5%; external: 81.4% vs. 63.8%) (all p < 0.001). Recurr-NET was also superior to MVI in stratifying liver-related and all-cause mortality (all p < 0.001). Its performance remained robust in subgroup analyses. Recurr-NET accurately predicted HCC recurrence, outperforming MVI and clinical prediction scores, highlighting its potential in preoperative prognostication.
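The AUROC figures that drive this comparison are rank statistics: the probability that a randomly chosen positive case (here, a patient who recurred) receives a higher risk score than a randomly chosen negative. A minimal tie-aware sketch:

```python
def auroc(scores, labels):
    """Rank-based AUROC over raw risk scores and 0/1 outcome labels.

    Counts, over all positive-negative pairs, how often the positive
    case scores higher (ties count as half a win).
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, scores `[0.1, 0.4, 0.35, 0.8]` with labels `[0, 0, 1, 1]` yield an AUROC of 0.75, since three of the four positive-negative pairs are ranked correctly.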

M4CXR: Exploring Multitask Potentials of Multimodal Large Language Models for Chest X-Ray Interpretation.

Park J, Kim S, Yoon B, Hyun J, Choi K

PubMed · Aug 1, 2025
The rapid evolution of artificial intelligence, especially in large language models (LLMs), has significantly impacted various domains, including healthcare. In chest X-ray (CXR) analysis, previous studies have employed LLMs, but with limitations: they either underutilize the LLMs' capability for multitask learning or lack clinical accuracy. This article presents M4CXR, a multimodal LLM designed to enhance CXR interpretation. The model is trained on a visual instruction-following dataset that integrates various task-specific datasets in a conversational format. As a result, the model supports multiple tasks such as medical report generation (MRG), visual grounding, and visual question answering (VQA). M4CXR achieves state-of-the-art clinical accuracy in MRG by employing a chain-of-thought (CoT) prompting strategy, in which it first identifies findings in CXR images and then generates the corresponding report. The model adapts to various MRG scenarios depending on the available inputs, such as single-image, multi-image, and multi-study contexts. In addition to MRG, M4CXR performs visual grounding at a level comparable to specialized models and demonstrates outstanding performance in VQA. Both quantitative and qualitative assessments reveal M4CXR's versatility in MRG, visual grounding, and VQA, while consistently maintaining clinical accuracy.
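A chain-of-thought MRG prompt of the kind described — elicit findings first, then condition report generation on them — can be skeletonized as below. The template wording and image-token format are purely illustrative, not M4CXR's actual prompts:

```python
def build_cot_report_prompt(images, findings=None):
    """Two-turn chain-of-thought prompt skeleton for report generation.

    Turn 1 (findings is None): ask the model to enumerate findings.
    Turn 2: feed those findings back and ask for the report, so the
    final text is conditioned on the intermediate reasoning step.
    Token format <image_i> is a hypothetical placeholder.
    """
    image_refs = " ".join(f"<image_{i}>" for i in range(len(images)))
    if findings is None:
        return (f"{image_refs}\n"
                "Step 1: List the radiographic findings visible in the chest X-ray(s).")
    return (f"{image_refs}\n"
            f"Identified findings: {findings}\n"
            "Step 2: Write the radiology report based only on the findings above.")
```

The same builder covers single-image, multi-image, and multi-study inputs, since the image-token prefix simply grows with the number of images supplied.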

Deep learning model for automated segmentation of sphenoid sinus and middle skull base structures in CBCT volumes using nnU-Net v2.

Gülşen İT, Kuran A, Evli C, Baydar O, Dinç Başar K, Bilgir E, Çelik Ö, Bayrakdar İŞ, Orhan K, Acu B

PubMed · Aug 1, 2025
The purpose of this study was to develop a deep learning model based on nnU-Net v2 for the automated segmentation of the sphenoid sinus and middle skull base anatomic structures in cone-beam computed tomography (CBCT) volumes, and to evaluate the model's performance. In this retrospective study, the sphenoid sinus and surrounding anatomical structures in 99 CBCT scans were annotated using web-based labeling software. Model training was conducted with the nnU-Net v2 deep learning framework using a learning rate of 0.01 for 1000 epochs. The model's ability to automatically segment these anatomical structures in CBCT scans was evaluated using accuracy, precision, recall, the Dice coefficient (DC), 95% Hausdorff distance (95% HD), intersection over union (IoU), and AUC. The model segmented the sphenoid sinus, foramen rotundum, and Vidian canal with a high level of success, performing best on the sphenoid sinus (DC 0.96). The nnU-Net v2-based model thus achieved high segmentation performance for these middle skull base structures, but demonstrated limited performance on the other foramina of the middle skull base, indicating the need for further optimization for these structures.
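The 95% Hausdorff distance (95% HD) used in this evaluation softens the classic maximum surface distance by taking the 95th percentile of boundary-to-boundary distances, making it robust to a few outlier voxels. A minimal sketch over explicit boundary-point sets:

```python
import math

def hd95(points_a, points_b):
    """95th-percentile symmetric Hausdorff distance between two point sets.

    points_a/points_b are iterables of coordinate tuples (e.g. voxel
    coordinates on the predicted and reference surfaces). Distances are
    pooled in both directions before taking the 95th percentile.
    """
    def directed(src, dst):
        return [min(math.dist(p, q) for q in dst) for p in src]
    dists = sorted(directed(points_a, points_b) + directed(points_b, points_a))
    idx = min(len(dists) - 1, math.ceil(0.95 * len(dists)) - 1)
    return dists[idx]
```

Identical boundaries give 0, and two single points at (0, 0) and (3, 4) give 5.0; production pipelines typically extract the boundary points from label volumes first.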