Page 106 of 3433427 results

Identifying signatures of image phenotypes to track treatment response in liver disease.

Perkonigg M, Bastati N, Ba-Ssalamah A, Mesenbrink P, Goehler A, Martic M, Zhou X, Trauner M, Langs G

PubMed | Jul 21 2025
Quantifiable image patterns associated with disease progression and treatment response are critical tools for guiding individual treatment and for developing novel therapies. Here, we show that unsupervised machine learning can identify a pattern vocabulary of liver tissue in magnetic resonance images that quantifies treatment response in diffuse liver disease. Deep clustering networks simultaneously encode and cluster patches of medical images into a low-dimensional latent space to establish a tissue vocabulary. The resulting tissue types capture differential tissue change and its location in the liver associated with treatment response. We demonstrate the utility of the vocabulary in a randomized controlled trial cohort of patients with nonalcoholic steatohepatitis. First, we use the vocabulary to compare longitudinal liver change in a placebo and a treatment cohort. Results show that the method identifies specific liver tissue change pathways associated with treatment and enables better separation between treatment groups than established non-imaging measures. Moreover, we show that the vocabulary can predict biopsy-derived features from non-invasive imaging data. Finally, we validate the method in a separate replication cohort, demonstrating its applicability.
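The abstract describes deep clustering networks that jointly encode image patches and cluster them into a tissue vocabulary. As an illustrative stand-in (not the paper's method, which learns the latent space and clusters jointly with a neural encoder), a plain k-means over already-encoded patch feature vectors shows the clustering half of that idea:

```python
import random

def kmeans(vectors, k, iters=20, seed=0):
    """Assign each patch feature vector to one of k 'tissue types'."""
    rng = random.Random(seed)
    centers = [list(v) for v in rng.sample(vectors, k)]
    for _ in range(iters):
        # assignment step: nearest centre by squared Euclidean distance
        labels = [min(range(k),
                      key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
                  for v in vectors]
        # update step: move each centre to the mean of its members
        for c in range(k):
            members = [v for v, l in zip(vectors, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers

# toy 2-D "latent codes" for patches drawn from two tissue types
patches = [(0.1, 0.2), (0.0, 0.1), (0.9, 1.0), (1.0, 0.8)]
labels, centers = kmeans(patches, k=2)
```

Once each patch carries a cluster label, a per-patient histogram over labels gives the kind of quantitative tissue profile the study tracks longitudinally.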

[A multi-feature fusion-based model for fetal orientation classification from intrapartum ultrasound videos].

Zheng Z, Yang X, Wu S, Zhang S, Lyu G, Liu P, Wang J, He S

PubMed | Jul 20 2025
To construct an intelligent analysis model based on multi-feature fusion for classifying fetal orientation in intrapartum ultrasound videos. The proposed model consists of Input, Backbone Network, and Classification Head modules. The Input module performs data augmentation to improve sample quality and the generalization ability of the model. The Backbone Network performs feature extraction and is based on YOLOv8 combined with the CBAM, ECA, and PSA attention mechanisms and the AIFI feature-interaction module. The Classification Head consists of a convolutional layer and a softmax function that outputs the final probability of each class. Images of the key structures (the eyes, face, head, thalamus, and spine) were annotated with bounding boxes by physicians for model training to improve classification accuracy for the occiput anterior, occiput posterior, and occiput transverse positions. The experimental results showed that the proposed model performed excellently in the fetal orientation classification task, with a classification accuracy of 0.984, an area under the PR curve (average precision) of 0.993, an area under the ROC curve of 0.984, and a kappa consistency score of 0.974. The predictions of the deep learning model were highly consistent with the actual classifications. The multi-feature fusion model proposed in this study can efficiently and accurately classify fetal orientation in intrapartum ultrasound videos.
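The classification head described above ends in a softmax that converts raw class scores into a probability per orientation. A minimal sketch of that final step, with made-up logits for the three occiput positions (the convolutional layer that produces the logits is omitted):

```python
import math

def softmax(logits):
    """Convert raw class scores to probabilities summing to 1."""
    m = max(logits)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# hypothetical scores for occiput anterior / posterior / transverse
probs = softmax([2.0, 0.5, 0.1])
predicted = max(range(len(probs)), key=probs.__getitem__)  # → class 0
```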

Results from a Swedish model-based analysis of the cost-effectiveness of AI-assisted digital mammography.

Lyth J, Gialias P, Husberg M, Bernfort L, Bjerner T, Wiberg MK, Levin LÅ, Gustafsson H

PubMed | Jul 19 2025
To evaluate the cost-effectiveness of AI-assisted digital mammography (AI-DM) compared to conventional biennial breast cancer digital mammography screening (cDM) with double reading of screening mammograms, and to investigate the change in cost-effectiveness across four different sub-strategies of AI-DM. A decision-analytic state-transition Markov model was used to analyse the decision of whether to use cDM or AI-DM in breast cancer screening. In this Markov model, one-year cycles were used, and the analysis was performed from a healthcare perspective with a lifetime horizon. In the model, we analysed 1000 hypothetical individuals attending mammography screenings assessed with AI-DM compared with 1000 hypothetical individuals assessed with cDM. The total costs, including both screening-related costs and breast cancer-related costs, were €3,468,967 and €3,528,288 for AI-DM and cDM, respectively. AI-DM thus resulted in a cost saving of €59,320 compared to cDM. Per 1000 individuals, AI-DM gained 10.8 quality-adjusted life years (QALYs) compared to cDM. Gaining QALYs at a lower cost means that the AI-DM screening strategy was dominant compared to cDM. Break-even occurred at the second screening, at age 42 years. This analysis showed that AI-assisted mammography for biennial breast cancer screening in a Swedish population of women aged 40-74 years is a cost-saving strategy compared to a conventional strategy using double human screen reading. Further clinical studies are needed, as scenario analyses showed that other strategies, more dependent on AI, are also cost-saving. Question Is AI-DM cost-effective compared with conventional biennial cDM screening with double reading? Findings AI-DM is cost-saving, and the break-even point occurred at the second screening, at age 42 years. Clinical relevance The implementation of AI is clearly cost-effective, as it reduces the total cost for the healthcare system and simultaneously results in a gain in QALYs.
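A state-transition Markov model with one-year cycles, as used in the study, accumulates costs and QALYs per strategy and then checks for dominance (more QALYs at lower cost). The sketch below uses entirely invented probabilities, costs, and utilities; only the structure of the comparison mirrors the analysis:

```python
def markov_strategy(p_detect_early, n_cycles=35, cohort=1000.0,
                    p_onset=0.005, screen_cost=50.0,
                    cost_early=20000.0, cost_late=40000.0,
                    qaly_healthy=1.0, qaly_early=0.9, qaly_late=0.6):
    """One-year-cycle state-transition model: healthy -> early/late-detected
    cancer. All probabilities, costs and utilities are illustrative, not
    taken from the study."""
    healthy, early, late = cohort, 0.0, 0.0
    cost = qalys = 0.0
    for _ in range(n_cycles):
        onset = healthy * p_onset
        healthy -= onset
        early += onset * p_detect_early        # caught at a curable stage
        late += onset * (1 - p_detect_early)   # caught late, costlier care
        cost += (healthy * screen_cost
                 + early * cost_early * 0.1 + late * cost_late * 0.1)
        qalys += healthy * qaly_healthy + early * qaly_early + late * qaly_late
    return cost, qalys

cost_ai, qaly_ai = markov_strategy(p_detect_early=0.85)    # AI-assisted reading
cost_cdm, qaly_cdm = markov_strategy(p_detect_early=0.80)  # double human reading
# the AI strategy is "dominant" if it costs less and yields more QALYs
dominant = cost_ai < cost_cdm and qaly_ai > qaly_cdm
```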

Medical radiology report generation: A systematic review of current deep learning methods, trends, and future directions.

Izhar A, Idris N, Japar N

PubMed | Jul 19 2025
Medical radiology reports play a crucial role in diagnosing various diseases, yet generating them manually is time-consuming and burdens clinical workflows. Medical radiology report generation (MRRG) aims to automate this process using deep learning to assist radiologists and reduce patient wait times. This study presents the most comprehensive systematic review to date on deep learning-based MRRG, encompassing recent advances that span traditional architectures to large language models. We focus on available datasets, modeling approaches, and evaluation practices. Following PRISMA guidelines, we retrieved 323 articles from major academic databases and included 78 studies after eligibility screening. We critically analyze key components such as model architectures, loss functions, datasets, evaluation metrics, and optimizers - identifying 22 widely used datasets, 14 evaluation metrics, around 20 loss functions, over 25 visual backbones, and more than 30 textual backbones. To support reproducibility and accelerate future research, we also compile links to modern models, toolkits, and pretrained resources. Our findings provide technical insights and outline future directions to address current limitations, promoting collaboration at the intersection of medical imaging, natural language processing, and deep learning to advance trustworthy AI systems in radiology.

Latent Class Analysis Identifies Distinct Patient Phenotypes Associated With Mistaken Treatment Decisions and Adverse Outcomes in Coronary Artery Disease.

Qi J, Wang Z, Ma X, Wang Z, Li Y, Yang L, Shi D, Zhou Y

PubMed | Jul 19 2025
This study aimed to identify patient characteristics linked to mistaken treatments and major adverse cardiovascular events (MACE) in percutaneous coronary intervention (PCI) for coronary artery disease (CAD) using deep learning-based fractional flow reserve (DEEPVESSEL-FFR, DVFFR). A retrospective cohort of 3,840 PCI patients was analyzed using latent class analysis (LCA) based on eight factors. Mistaken treatment was defined as negative DVFFR patients undergoing revascularization or positive DVFFR patients not receiving it. MACE included all-cause mortality, rehospitalization for unstable angina, and non-fatal myocardial infarction. Patients were classified into comorbidities (Class 1), smoking-drinking (Class 2), and relatively healthy (Class 3) groups. Mistaken treatment was highest in Class 2 (15.4% vs. 6.7%, <i>P</i> < .001), while MACE was highest in Class 1 (7.0% vs. 4.8%, <i>P</i> < .001). Adjusted analyses showed increased mistaken treatment risk in Class 1 (OR 1.96; 95% CI 1.49-2.57) and Class 2 (OR 1.69; 95% CI 1.28-2.25) compared with Class 3. Class 1 also had higher MACE risk (HR 1.53; 95% CI 1.10-2.12). In conclusion, comorbidities and smoking-drinking classes had higher mistaken treatment and MACE risks compared with the relatively healthy class.
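The study's definition of mistaken treatment reduces to a simple rule on two flags (a DVFFR-negative patient who is revascularized, or a DVFFR-positive patient who is not); a literal encoding:

```python
def mistaken_treatment(dvffr_positive, revascularized):
    """Encode the study's definition: treatment is 'mistaken' when a
    DVFFR-negative patient undergoes revascularization, or a
    DVFFR-positive patient does not receive it."""
    return dvffr_positive != revascularized  # an exclusive-or of the two flags

# a DVFFR-negative patient who nevertheless underwent PCI counts as mistaken
flag = mistaken_treatment(dvffr_positive=False, revascularized=True)  # → True
```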

Emerging Role of MRI-Based Artificial Intelligence in Individualized Treatment Strategies for Hepatocellular Carcinoma: A Narrative Review.

Che F, Zhu J, Li Q, Jiang H, Wei Y, Song B

PubMed | Jul 19 2025
Hepatocellular carcinoma (HCC) is the most common subtype of primary liver cancer, with significant variability in patient outcomes even within the same stage according to the Barcelona Clinic Liver Cancer staging system. Accurately predicting patient prognosis and potential treatment response prior to therapy initiation is crucial for personalized clinical decision-making. This review focuses on the application of artificial intelligence (AI) in magnetic resonance imaging for guiding individualized treatment strategies in HCC management. Specifically, we emphasize AI-based tools for pre-treatment prediction of therapeutic response and prognosis. AI techniques such as radiomics and deep learning have shown strong potential in extracting high-dimensional imaging features to characterize tumors and liver parenchyma, predict treatment outcomes, and support prognostic stratification. These advances contribute to more individualized and precise treatment planning. However, challenges remain in model generalizability, interpretability, and clinical integration, highlighting the need for standardized imaging datasets and multi-omics fusion to fully realize the potential of AI in personalized HCC care. Evidence level: 5. Technical efficacy: 4.

Automated Quantitative Evaluation of Age-Related Thymic Involution on Plain Chest CT.

Okamura YT, Endo K, Toriihara A, Fukuda I, Isogai J, Sato Y, Yasuoka K, Kagami SI

PubMed | Jul 19 2025
The thymus is an important immune organ involved in T-cell generation. Age-related involution of the thymus has been linked to various age-related pathologies in recent studies. However, there has been no method proposed to quantify age-related thymic involution based on a clinical image. The purpose of this study was to establish an objective and automatic method to quantify age-related thymic involution based on plain chest computed tomography (CT) images. We newly defined the thymic region for quantification (TRQ) as the target anatomical region. We manually segmented the TRQ in 135 CT studies, followed by construction of segmentation neural network (NN) models using the data. We developed the estimator of thymic volume (ETV), a quantitative indicator of the thymic tissue volume inside the segmented TRQ, based on simple mathematical modeling. The Hounsfield unit (HU) value and volume of the NN-segmented TRQ were measured, and the ETV was calculated in each CT study from 853 healthy subjects. We investigated how these measures were related to age and sex using quantile additive regression models. A significant correlation between the NN-segmented and manually segmented TRQ was seen for both the HU value and volume (r = 0.996 and r = 0.986, respectively). ETV declined exponentially with age (p < 0.001), consistent with age-related decline in the thymic tissue volume. In conclusion, our method enabled robust quantification of age-related thymic involution. Our method may aid in the prediction and risk classification of pathologies related to thymic involution.
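The abstract states only that the ETV is derived by "simple mathematical modeling" from the HU value and volume of the segmented TRQ; the formula itself is not given. One plausible reconstruction is a two-compartment fat/thymus mixing model, in which the mean HU fixes the thymic tissue fraction by linear interpolation. The reference HU values below are assumptions for illustration, not values from the paper:

```python
def estimate_thymic_volume(mean_hu, region_volume_ml,
                           hu_fat=-100.0, hu_thymus=40.0):
    """Hypothetical two-compartment sketch of the ETV idea: treat the
    segmented region as a linear mix of fat and thymic tissue and recover
    the thymic fraction from the mean HU."""
    frac = (mean_hu - hu_fat) / (hu_thymus - hu_fat)
    frac = min(1.0, max(0.0, frac))  # clamp to a physical [0, 1] fraction
    return frac * region_volume_ml

# a mostly fatty (involuted) region yields a small estimated thymic volume
etv = estimate_thymic_volume(mean_hu=-72.0, region_volume_ml=50.0)  # ≈ 10.0 ml
```

Under this model, the exponential age-related decline in ETV reported in the study would appear as the mean HU of the TRQ drifting toward the fat reference value with age.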

A novel hybrid convolutional and transformer network for lymphoma classification.

Sikkandar MY, Sundaram SG, Almeshari MN, Begum SS, Sankari ES, Alduraywish YA, Obidallah WJ, Alotaibi FM

PubMed | Jul 19 2025
Lymphoma poses a critical health challenge worldwide, demanding computer-aided solutions for diagnosis, treatment, and research to significantly enhance patient outcomes and combat this pervasive disease. Accurate classification of lymphoma subtypes from Whole Slide Images (WSIs) remains a complex challenge due to morphological similarities among subtypes and the limitations of models that fail to jointly capture local and global features. Traditional diagnostic methods, limited by subjectivity and inconsistency, highlight the need for advanced, Artificial Intelligence (AI)-driven solutions. This study proposes a hybrid deep learning framework, the Hybrid Convolutional and Transformer Network for Lymphoma Classification (HCTN-LC), designed to enhance the precision and interpretability of lymphoma subtype classification. The model employs a dual-pathway architecture that combines a lightweight SqueezeNet for local feature extraction with a Vision Transformer (ViT) for capturing global context. A Feature Fusion and Enhancement Module (FFEM) is introduced to dynamically integrate features from both pathways. The model is trained and evaluated on a large WSI dataset encompassing three lymphoma subtypes: chronic lymphocytic leukemia (CLL), follicular lymphoma (FL), and mantle cell lymphoma (MCL). HCTN-LC achieves superior performance, with an overall accuracy of 99.87%, sensitivity of 99.87%, specificity of 99.93%, and AUC of 0.9991, outperforming several recent hybrid models. Grad-CAM visualizations confirm the model's focus on diagnostically relevant regions. The proposed HCTN-LC demonstrates strong potential for real-time and low-resource clinical deployment, offering a robust and interpretable AI tool for hematopathological diagnosis.
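The FFEM "dynamically integrates" features from the SqueezeNet and ViT pathways. The paper's module is learned end to end; a toy per-dimension gated blend conveys the mechanism, with arbitrary gate weights standing in for learned parameters:

```python
import math

def gated_fusion(local_feats, global_feats, gate_weights):
    """Toy stand-in for a feature fusion module: blend local (CNN-path) and
    global (transformer-path) features with a per-dimension sigmoid gate."""
    assert len(local_feats) == len(global_feats) == len(gate_weights)
    gates = [1.0 / (1.0 + math.exp(-w)) for w in gate_weights]  # sigmoid, in (0, 1)
    return [g * l + (1.0 - g) * h
            for g, l, h in zip(gates, local_feats, global_feats)]

fused = gated_fusion([1.0, 0.0], [0.0, 1.0], [0.0, 10.0])
# gate 0 (weight 0) blends evenly; gate 1 (weight 10) favours the local path
```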

Enhancing cardiac disease detection via a fusion of machine learning and medical imaging.

Yu T, Chen K

PubMed | Jul 19 2025
Cardiovascular illnesses continue to be a predominant cause of mortality globally, underscoring the necessity for prompt and precise diagnosis to mitigate consequences and healthcare expenditures. This work presents a complete hybrid methodology that integrates machine learning techniques with medical image analysis to improve the identification of cardiovascular diseases. This research integrates multiple imaging modalities, such as echocardiography, cardiac MRI, and chest radiographs, with patient health records, enhancing diagnostic accuracy beyond standard techniques that depend exclusively on numerical clinical data. During the preprocessing phase, essential visual features are extracted from medical images using image processing methods and convolutional neural networks (CNNs). These are subsequently integrated with clinical characteristics and input into various machine learning classifiers, including Support Vector Machines (SVM), Random Forest (RF), XGBoost, and Deep Neural Networks (DNNs), to differentiate between healthy persons and patients with cardiovascular disease. The proposed method attained a remarkable diagnostic accuracy of up to 96%, exceeding models reliant exclusively on clinical data. This study highlights the capability of integrating artificial intelligence with medical imaging to create a highly accurate and non-invasive diagnostic instrument for cardiovascular disease.
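The pipeline's key step is concatenating CNN-derived image features with tabular clinical variables into one vector before classification. A trivial sketch with placeholder values; the linear score below merely stands in for the study's actual classifiers (SVM, RF, XGBoost, DNNs):

```python
def combine_features(image_feats, clinical_feats):
    """Concatenate image-derived features with clinical variables into a
    single vector for a downstream classifier."""
    return list(image_feats) + list(clinical_feats)

def linear_score(features, weights, bias=0.0):
    """A trivial linear decision score over the combined vector."""
    return sum(f * w for f, w in zip(features, weights)) + bias

# hypothetical CNN features plus [age, sex] as clinical variables
combined = combine_features([0.7, 0.1], [63.0, 1.0])
```

In practice the clinical columns would be normalised before concatenation so the image features are not swamped by raw magnitudes like age.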

Influence of high-performance image-to-image translation networks on clinical visual assessment and outcome prediction: utilizing ultrasound to MRI translation in prostate cancer.

Salmanpour MR, Mousavi A, Xu Y, Weeks WB, Hacihaliloglu I

PubMed | Jul 19 2025
Image-to-image (I2I) translation networks have emerged as promising tools for generating synthetic medical images; however, their clinical reliability and ability to preserve diagnostically relevant features remain underexplored. This study evaluates the performance of state-of-the-art 2D/3D I2I networks for converting ultrasound (US) images to synthetic MRI in prostate cancer (PCa) imaging. The novelty lies in combining radiomics, expert clinical evaluation, and classification performance to comprehensively benchmark these models for potential integration into real-world diagnostic workflows. A dataset of 794 PCa patients was analyzed using ten leading I2I networks to synthesize MRI from US input. Radiomics feature (RF) analysis was performed using Spearman correlation to assess whether high-performing networks (SSIM > 0.85) preserved quantitative imaging biomarkers. A qualitative evaluation by seven experienced physicians assessed the anatomical realism, presence of artifacts, and diagnostic interpretability of synthetic images. Additionally, classification tasks using synthetic images were conducted using two machine learning and one deep learning model to assess the practical diagnostic benefit. Among all networks, 2D-Pix2Pix achieved the highest SSIM (0.855 ± 0.032). RF analysis showed that 76 out of 186 features were preserved post-translation, while the remainder were degraded or lost. Qualitative feedback revealed consistent issues with low-level feature preservation and artifact generation, particularly in lesion-rich regions. These evaluations were conducted to assess whether synthetic MRI retained clinically relevant patterns, supported expert interpretation, and improved diagnostic accuracy. Importantly, classification performance using synthetic MRI significantly exceeded that of US-based input, achieving average accuracy and AUC of ~ 0.93 ± 0.05. 
Although 2D-Pix2Pix showed the best overall performance in similarity and partial RF preservation, improvements are still required in lesion-level fidelity and artifact suppression. The combination of radiomics, qualitative, and classification analyses offered a holistic view of the current strengths and limitations of I2I models, supporting their potential in clinical applications pending further refinement and validation.
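SSIM, the metric used above to rank the translation networks, compares two images through their means, variances, and covariance. A single-window (global) version of the formula, for illustration only; real evaluations such as the study's compute SSIM over local windows and average:

```python
def ssim(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Global SSIM between two equal-length intensity lists (values in [0, 1])."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n                      # means
    vx = sum((a - mx) ** 2 for a in x) / n               # variances
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return (((2 * mx * my + c1) * (2 * cov + c2))
            / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))

identical = ssim([0.2, 0.5, 0.8], [0.2, 0.5, 0.8])  # → 1.0 for identical inputs
```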