
Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention.

Wu J, Lin J, Jiang X, Zheng W, Zhong L, Pang Y, Meng H, Li Z

PubMed · May 15, 2025
Sparse-view CT reconstruction is a challenging ill-posed inverse problem, where insufficient projection data leads to degraded image quality with increased noise and artifacts. Recent deep learning approaches have shown promising results in CT reconstruction. However, existing methods often neglect projection data constraints and rely heavily on convolutional neural networks, resulting in limited feature extraction capabilities and inadequate adaptability. To address these limitations, we propose a Dual-domain deep Prior-guided Multi-scale fusion Attention (DPMA) model for sparse-view CT reconstruction, aiming to enhance reconstruction accuracy while ensuring data consistency and stability. First, we establish a residual regularization strategy that applies constraints on the difference between the prior image and target image, effectively integrating deep learning-based priors with model-based optimization. Second, we develop a multi-scale fusion attention mechanism that employs parallel pathways to simultaneously model global context, regional dependencies, and local details in a unified framework. Third, we incorporate a physics-informed consistency module based on range-null space decomposition to ensure adherence to projection data constraints. Experimental results demonstrate that DPMA achieves improved reconstruction quality compared to existing approaches, particularly in noise suppression, artifact reduction, and fine detail preservation.
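The abstract describes the physics-informed consistency module only as being based on range-null space decomposition; as a rough illustration under that assumption, the NumPy sketch below shows the generic decomposition x̂ = A⁺y + (I − A⁺A)x_net with a toy forward operator. The matrix A, the sizes, and the variable names are placeholders, not the paper's.

```python
import numpy as np

# Toy underdetermined forward operator A (m < n: fewer measurements than
# unknowns), standing in for the sparse-view CT projection operator.
rng = np.random.default_rng(0)
m, n = 32, 64
A = rng.standard_normal((m, n))
A_pinv = np.linalg.pinv(A)  # Moore-Penrose pseudo-inverse A+

def consistency_project(x_net, y):
    """Range-null space decomposition: the range component is filled from
    the measurements y, and the network estimate x_net contributes only its
    null-space component, so A @ x_hat reproduces y exactly."""
    range_part = A_pinv @ y                   # A+ y, fixed by the data
    null_part = x_net - A_pinv @ (A @ x_net)  # (I - A+ A) x_net
    return range_part + null_part

x_true = rng.standard_normal(n)
y = A @ x_true                                 # simulated projections
x_net = x_true + 0.1 * rng.standard_normal(n)  # stand-in network output

x_hat = consistency_project(x_net, y)
print(np.allclose(A @ x_hat, y))  # True: projection data constraints hold
```

The key property is that the measurements fix the range component, so the network can only influence the null-space component; this is what enforces adherence to the projection data.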

A monocular endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network.

Chong N, Yang F, Wei K

PubMed · May 15, 2025
Minimally invasive surgery involves entering the body through small incisions or natural orifices, using a medical endoscope for observation and clinical procedures. However, traditional endoscopic images often suffer from low texture and uneven illumination, which can negatively impact surgical and diagnostic outcomes. To address these challenges, many researchers have applied deep learning methods to enhance the processing of endoscopic images. This paper proposes a monocular medical endoscopic image depth estimation method based on a window-adaptive asymmetric dual-branch Siamese network. In this network, one branch focuses on processing global image information, while the other concentrates on local details. An improved lightweight Squeeze-and-Excitation (SE) module is added to the final layer of each branch, dynamically adjusting the inter-channel weights through self-attention. The outputs from both branches are then integrated using a lightweight cross-attention feature fusion module, enabling cross-branch feature interaction and enhancing the overall feature representation capability of the network. Extensive ablation and comparative experiments were conducted on medical datasets (EAD2019, Hamlyn, M2caiSeg, UCL) and a non-medical dataset (NYUDepthV2), with both qualitative and quantitative results (measured in terms of RMSE, AbsRel, FLOPs, and running time) demonstrating the superiority of the proposed model. Additionally, comparisons with CT images show good organ boundary matching capability, highlighting the potential of our method for clinical applications. The key code of this paper is available at: https://github.com/superchongcnn/AttenAdapt_DE.
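The abstract does not detail the improved lightweight SE module, so for orientation, here is a minimal PyTorch sketch of the classic Squeeze-and-Excitation block it builds on; the channel count and reduction ratio are illustrative, and the paper's self-attention modification is not reproduced.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard Squeeze-and-Excitation block (Hu et al.): global average
    pooling ("squeeze") followed by a two-layer bottleneck MLP producing
    per-channel gates ("excitation"). The paper's improved lightweight
    variant is not specified in the abstract; this is the classic form."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)       # squeeze: (B, C)
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel weights
        return x * w                      # reweight the feature maps

feats = torch.randn(2, 64, 32, 32)
print(SEBlock(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```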

Enhancing medical explainability in deep learning for age-related macular degeneration diagnosis.

Shi L

PubMed · May 15, 2025
Deep learning models hold significant promise for disease diagnosis but often lack transparency in their decision-making processes, limiting trust and hindering clinical adoption. This study introduces a novel multi-task learning framework to enhance the medical explainability of deep learning models for diagnosing age-related macular degeneration (AMD) using fundus images. The framework simultaneously performs AMD classification and lesion segmentation, allowing the model to support its diagnoses with AMD-associated lesions identified through segmentation. In addition, we perform an in-depth interpretability analysis of the model, proposing the Medical Explainability Index (MXI), a novel metric that quantifies the medical relevance of the generated heatmaps by comparing them with the model's lesion segmentation output. This metric provides a measurable basis to evaluate whether the model's decisions are grounded in clinically meaningful information. The proposed method was trained and evaluated on the Automatic Detection Challenge on Age-Related Macular Degeneration (ADAM) dataset. Experimental results demonstrate robust performance, achieving an area under the curve (AUC) of 0.96 for classification and a Dice similarity coefficient (DSC) of 0.59 for segmentation, outperforming single-task models. By offering interpretable and clinically relevant insights, our approach aims to foster greater trust in AI-driven disease diagnosis and facilitate its adoption in clinical practice.
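The abstract defines the MXI only as a comparison between the model's heatmaps and its lesion segmentation output; the exact formula is not given. The snippet below is one plausible stand-in, namely the fraction of saliency mass falling inside the predicted lesion mask, and should be read as an assumption rather than the paper's definition.

```python
import numpy as np

def explainability_overlap(heatmap: np.ndarray, lesion_mask: np.ndarray) -> float:
    """Illustrative stand-in for the Medical Explainability Index (MXI):
    the fraction of normalized saliency mass inside the predicted lesion
    mask. The paper's actual MXI definition is not given in the abstract."""
    h = heatmap.clip(min=0)
    total = h.sum()
    if total == 0:
        return 0.0
    return float(h[lesion_mask.astype(bool)].sum() / total)

heatmap = np.random.rand(256, 256)           # stand-in saliency map
mask = np.zeros((256, 256), dtype=bool)
mask[100:150, 100:150] = True                # stand-in lesion segmentation
print(explainability_overlap(heatmap, mask))  # closer to 1 = more "medical"
```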

Machine learning for grading prediction and survival analysis in high grade glioma.

Li X, Huang X, Shen Y, Yu S, Zheng L, Cai Y, Yang Y, Zhang R, Zhu L, Wang E

PubMed · May 15, 2025
We developed and validated a magnetic resonance imaging (MRI)-based radiomics model for the classification of high-grade glioma (HGG) and determined the optimal machine learning (ML) approach. This retrospective analysis included 184 patients (59 grade III and 125 grade IV lesions). Radiomics features were extracted from T1-weighted MRI (T1WI). The least absolute shrinkage and selection operator (LASSO) feature selection method and seven classification methods, namely logistic regression, XGBoost, Decision Tree, Random Forest (RF), AdaBoost, Gradient Boosting Decision Tree, and a Stacking fusion model, were used to differentiate HGG. Performance was compared in terms of AUC, sensitivity, accuracy, precision, and specificity. Among the non-fusion models, the XGBoost classifier achieved the best performance, and applying SMOTE to address the class imbalance improved the performance of all classifiers. The Stacking fusion model performed best, with an AUC of 0.95 (sensitivity 0.84; accuracy 0.85; F1 score 0.85). MRI-based quantitative radiomics features perform well in classifying HGG: XGBoost outperforms the other classifiers among the non-fusion models, and the Stacking fusion model outperforms the non-fusion models.
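As a rough sketch of the described pipeline (LASSO-style selection, SMOTE oversampling, and a stacking fusion of base classifiers), the scikit-learn/imbalanced-learn/XGBoost code below mirrors the steps on a synthetic stand-in feature matrix; all hyperparameters are illustrative, not the paper's.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline as ImbPipeline
from xgboost import XGBClassifier

# Stand-in for the T1WI radiomics matrix (59 grade III vs 125 grade IV).
X, y = make_classification(n_samples=184, n_features=100,
                           weights=[0.32, 0.68], random_state=0)

pipe = ImbPipeline([
    # L1-penalized selection, standing in for the paper's LASSO step.
    ("lasso_select", SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=0.1))),
    # SMOTE oversampling of the minority (grade III) class.
    ("smote", SMOTE(random_state=0)),
    # Stacking fusion of base classifiers with a logistic meta-learner.
    ("stack", StackingClassifier(
        estimators=[("xgb", XGBClassifier(eval_metric="logloss")),
                    ("rf", RandomForestClassifier(random_state=0))],
        final_estimator=LogisticRegression())),
])

print(cross_val_score(pipe, X, y, cv=5, scoring="roc_auc").mean())
```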

Machine learning-based prognostic subgrouping of glioblastoma: A multicenter study.

Akbari H, Bakas S, Sako C, Fathi Kazerooni A, Villanueva-Meyer J, Garcia JA, Mamourian E, Liu F, Cao Q, Shinohara RT, Baid U, Getka A, Pati S, Singh A, Calabrese E, Chang S, Rudie J, Sotiras A, LaMontagne P, Marcus DS, Milchenko M, Nazeri A, Balana C, Capellades J, Puig J, Badve C, Barnholtz-Sloan JS, Sloan AE, Vadmal V, Waite K, Ak M, Colen RR, Park YW, Ahn SS, Chang JH, Choi YS, Lee SK, Alexander GS, Ali AS, Dicker AP, Flanders AE, Liem S, Lombardo J, Shi W, Shukla G, Griffith B, Poisson LM, Rogers LR, Kotrotsou A, Booth TC, Jain R, Lee M, Mahajan A, Chakravarti A, Palmer JD, DiCostanzo D, Fathallah-Shaykh H, Cepeda S, Santonocito OS, Di Stefano AL, Wiestler B, Melhem ER, Woodworth GF, Tiwari P, Valdes P, Matsumoto Y, Otani Y, Imoto R, Aboian M, Koizumi S, Kurozumi K, Kawakatsu T, Alexander K, Satgunaseelan L, Rulseh AM, Bagley SJ, Bilello M, Binder ZA, Brem S, Desai AS, Lustig RA, Maloney E, Prior T, Amankulor N, Nasrallah MP, O'Rourke DM, Mohan S, Davatzikos C

PubMed · May 15, 2025
Glioblastoma (GBM) is the most aggressive adult primary brain cancer, characterized by significant heterogeneity, which poses challenges for patient management, treatment planning, and clinical trial stratification. We developed a highly reproducible, personalized prognostication and clinical subgrouping system using machine learning (ML) on routine clinical data, magnetic resonance imaging (MRI), and molecular measures from 2838 demographically diverse patients across 22 institutions on 3 continents. Patients were stratified into favorable, intermediate, and poor prognostic subgroups (I, II, and III), assessed with Kaplan-Meier analysis and a Cox proportional hazards model with hazard ratios (HR). The ML model stratified patients into distinct prognostic subgroups with HRs between subgroups I-II and I-III of 1.62 (95% CI: 1.43-1.84, P < .001) and 3.48 (95% CI: 2.94-4.11, P < .001), respectively. Analysis of imaging features revealed several tumor properties contributing unique prognostic value, supporting the feasibility of a generalizable prognostic classification system in a diverse cohort. Our ML model demonstrates extensive reproducibility and online accessibility, utilizing routine imaging data rather than complex imaging protocols. This platform offers a unique approach to personalized patient management and clinical trial stratification in GBM.
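For readers unfamiliar with the statistics used, this hedged lifelines sketch reproduces the kind of Cox analysis described, computing hazard ratios between ML-derived subgroups on synthetic survival data; the data, subgroup encoding, and scale parameters are invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in: survival times (months), event indicators, and the
# ML-derived prognostic subgroup (I, II, III) as dummy variables.
rng = np.random.default_rng(0)
n = 300
subgroup = rng.integers(1, 4, n)
df = pd.DataFrame({
    "time": rng.exponential(scale=24 / subgroup),  # worse subgroup, shorter survival
    "event": rng.integers(0, 2, n),
    "subgroup_II": (subgroup == 2).astype(int),
    "subgroup_III": (subgroup == 3).astype(int),   # subgroup I is the reference
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
# exp(coef) gives the hazard ratios of subgroups II and III vs subgroup I,
# analogous to the paper's reported HRs of 1.62 and 3.48.
print(cph.summary[["exp(coef)", "exp(coef) lower 95%", "exp(coef) upper 95%"]])
```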

Characterizing ASD Subtypes Using Morphological Features from sMRI with Unsupervised Learning.

Raj A, Ratnaik R, Sengar SS, Fredo ARJ

PubMed · May 15, 2025
In this study, we sought to identify subtypes of autism spectrum disorder (ASD) using anatomical alterations found in structural magnetic resonance imaging (sMRI) data of the ASD brain together with machine learning tools. The sMRI data was first preprocessed using the FreeSurfer toolbox, and the brain was then segmented into 148 regions of interest using the Destrieux atlas. Features such as volume, thickness, surface area, and mean curvature were extracted for each brain region. We performed principal component analysis independently on the volume, thickness, surface area, and mean curvature features and identified the top 10 features. We then applied k-means clustering to these top 10 features and validated the number of clusters using the Elbow and Silhouette methods. Our study identified two clusters in the dataset, indicating the existence of two ASD subtypes. The features volume of scaled lh_G_front middle, thickness of scaled rh_S_temporal transverse, area of scaled lh_S_temporal sup, and mean curvature of scaled lh_G_precentral significantly discriminated the two clusters (p<0.05). Thus, our proposed method is effective for identifying ASD subtypes and may also be useful for screening other similar neurological disorders.
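The clustering pipeline described (PCA down to the top components, followed by k-means with Elbow and Silhouette validation) can be sketched in a few lines of scikit-learn; the random feature matrix below stands in for the real morphometric features, and applying a single PCA rather than one per feature family is a simplification.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

# Stand-in for the 148-region morphometric features (volume, thickness,
# surface area, mean curvature) from the FreeSurfer/Destrieux pipeline.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 148))

# PCA is applied per feature family in the paper; here a single PCA
# keeping the top 10 components is used for brevity.
X10 = PCA(n_components=10).fit_transform(StandardScaler().fit_transform(X))

# Elbow (inertia) and Silhouette curves to choose k, as in the paper.
for k in range(2, 7):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X10)
    print(k, round(km.inertia_, 1), round(silhouette_score(X10, km.labels_), 3))
```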

Energy-Efficient AI for Medical Diagnostics: Performance and Sustainability Analysis of ResNet and MobileNet.

Rehman ZU, Hassan U, Islam SU, Gallos P, Boudjadar J

PubMed · May 15, 2025
Artificial intelligence (AI) has transformed medical diagnostics by enhancing the accuracy of disease detection, particularly through deep learning models that analyze medical imaging data. However, the energy demands of training such models, including ResNet and MobileNet, are substantial and often overlooked, as researchers mainly focus on improving model accuracy. This study compares the energy use of these two models for classifying thoracic diseases using the well-known CheXpert dataset. We calculate power and energy consumption during training using the EnergyEfficientAI library. The results demonstrate that MobileNet outperforms ResNet by consuming less power and completing training faster, resulting in lower overall energy costs. This study highlights the importance of prioritizing energy efficiency in AI model development, promoting sustainable, eco-friendly approaches to advancing medical diagnosis.
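The paper measures consumption with the EnergyEfficientAI library, whose API is not shown in the abstract; as a stand-in, this sketch frames the same ResNet-vs-MobileNet comparison using codecarbon, a common open-source energy/emissions tracker, with a placeholder training loop.

```python
# Sketch only: the paper uses EnergyEfficientAI; codecarbon is a substitute
# chosen here because its API is publicly documented.
from codecarbon import EmissionsTracker

def train_one_epoch(model_name: str) -> None:
    # Placeholder for the actual ResNet / MobileNet training on CheXpert.
    for _ in range(10 ** 6):
        pass

for model_name in ("resnet", "mobilenet"):
    tracker = EmissionsTracker(project_name=model_name, log_level="error")
    tracker.start()
    train_one_epoch(model_name)
    emissions_kg = tracker.stop()  # estimated kg CO2-eq; energy details
                                   # are written to emissions.csv
    print(model_name, emissions_kg)
```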

Does Whole Brain Radiomics on Multimodal Neuroimaging Make Sense in Neuro-Oncology? A Proof of Concept Study.

Danilov G, Kalaeva D, Vikhrova N, Shugay S, Telysheva E, Goraynov S, Kosyrkova A, Pavlova G, Pronin I, Usachev D

PubMed · May 15, 2025
Employing a whole-brain (WB) mask as the region of interest for extracting radiomic features is a feasible, albeit less common, approach in neuro-oncology research. This study aims to evaluate the relationship between WB radiomic features, derived from various neuroimaging modalities in patients with gliomas, and key baseline characteristics of patients and tumors: sex, histological tumor type, WHO grade (2021), IDH1 mutation status, necrotic lesions, contrast enhancement, T/N peak value, and metabolic tumor volume. Forty-one patients (average age 50 ± 15 years; 21 females, 20 males) with supratentorial glial tumors were enrolled in this study, and a total of 38,720 radiomic features were extracted. Cluster analysis revealed that whole-brain images of biologically different tumors could be distinguished to a certain extent based on their imaging biomarkers. The ability of machine learning models to detect image properties such as contrast-enhanced or necrotic zones validated the radiomic features as objective descriptors of image semantics. Furthermore, the ability of the imaging biomarkers to predict tumor histology, grade, and mutation type underscores their diagnostic potential. Whole-brain radiomics using multimodal neuroimaging data thus appears informative in neuro-oncology, making research in this area well justified.
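Whole-brain radiomics extraction of this kind is commonly done with pyradiomics; the sketch below shows the general pattern of using a whole-brain mask as the ROI. The file names and extraction settings are hypothetical, since the paper's configuration is not given in the abstract.

```python
from radiomics import featureextractor
import SimpleITK as sitk

# Default extractor with all feature classes enabled; the paper's actual
# settings (bin width, filters, etc.) are not specified in the abstract.
extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()

# image: one neuroimaging modality; mask: a whole-brain mask (label 1)
# used as the ROI instead of a tumor segmentation. Hypothetical file names.
image = sitk.ReadImage("t1.nii.gz")
mask = sitk.ReadImage("brain_mask.nii.gz")

features = extractor.execute(image, mask, label=1)
print(len([k for k in features if k.startswith("original")]))
```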

Participatory Co-Creation of an AI-Supported Patient Information System: A Multi-Method Qualitative Study.

Heizmann C, Gleim P, Kellmeyer P

PubMed · May 15, 2025
In radiology and other medical fields, informed consent often relies on paper-based forms, which can overwhelm patients with complex terminology and are resource-intensive. The KIPA project addresses these challenges by developing an AI-assisted patient information system to streamline the consent process, improve patient understanding, and reduce healthcare workload. The KIPA system uses natural language processing (NLP) to provide real-time, accessible explanations, answer questions, and support informed consent. KIPA follows an 'ethics-by-design' approach, integrating user feedback to align with patient and clinician needs. Interviews and usability testing identified requirements such as simplified language and support for varying levels of digital literacy. The study presented here explores the participatory co-creation of the KIPA system, focusing on improving informed consent in radiology through a multi-method qualitative approach. Preliminary results suggest that KIPA improves patient engagement and reduces insecurities by providing proactive guidance and tailored information. Future work will extend testing to other stakeholders and assess the system's impact on clinical workflow.

A Deep-Learning Framework for Ovarian Cancer Subtype Classification Using Whole Slide Images.

Wang C, Yi Q, Aflakian A, Ye J, Arvanitis T, Dearn KD, Hajiyavand A

PubMed · May 15, 2025
Ovarian cancer, a leading cause of cancer-related deaths among women, comprises distinct subtypes, each requiring a different treatment approach. This paper presents a deep-learning framework for classifying ovarian cancer subtypes using Whole Slide Imaging (WSI). Our method comprises three stages: image tiling, feature extraction, and multi-instance learning. The approach is trained and validated on a public dataset from 80 distinct patients, achieving up to 89.8% accuracy with a notable improvement in computational efficiency. The results demonstrate the potential of our framework to augment diagnostic precision in clinical settings, offering a scalable solution for the accurate classification of ovarian cancer subtypes.
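The abstract names the three stages but not the multi-instance learning variant; as one common instantiation of the third stage, here is a minimal attention-based MIL head in PyTorch. The feature dimension and the five-subtype output are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Minimal attention-based MIL head (Ilse et al. style): tile-level
    feature vectors from one slide are pooled into a slide-level embedding
    via learned attention weights, then classified into subtypes. The
    paper's exact MIL variant is not specified; this is a common baseline."""
    def __init__(self, feat_dim: int = 512, n_subtypes: int = 5):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(feat_dim, 128), nn.Tanh(),
                                  nn.Linear(128, 1))
        self.head = nn.Linear(feat_dim, n_subtypes)

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        # tiles: (num_tiles, feat_dim) features from one whole slide image
        w = torch.softmax(self.attn(tiles), dim=0)  # (num_tiles, 1) weights
        slide_embedding = (w * tiles).sum(dim=0)    # attention-weighted pool
        return self.head(slide_embedding)           # subtype logits

logits = AttentionMIL()(torch.randn(1000, 512))
print(logits.shape)  # torch.Size([5])
```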