Alatrany AS, Lakhani K, Cowley AC, Yeo JL, Dattani A, Ayton SL, Deshpande A, Graham-Brown MPM, Davies MJ, Khunti K, Yates T, Sellers SL, Zhou H, Brady EM, Arnold JR, Deane J, McLean RJ, Proudlock FA, McCann GP, Gulsin GS

PubMed paper · Jul 31 2025
Individuals with Type 2 Diabetes (T2D) are at high risk of subclinical cardiovascular disease (CVD), potentially detectable through retinal alterations. In this single-centre, prospective cohort study, 255 asymptomatic adults with T2D and no prior history of CVD underwent echocardiography, non-contrast coronary computed tomography and cardiovascular magnetic resonance. Retinal photographs were evaluated for diabetic retinopathy grade and microvascular geometric characteristics using deep learning (DL) tools. Associations with cardiac imaging markers of subclinical CVD were explored. Of the participants (aged 64 ± 7 years, 62% male), 200 (78%) had no diabetic retinopathy and 55 (22%) had mild background retinopathy. Groups were well-matched for age, sex, ethnicity, CV risk factors, urine microalbuminuria, and serum natriuretic peptide and high-sensitivity troponin levels. Presence of retinopathy was associated with a greater burden of coronary atherosclerosis (coronary artery calcium score ≥ 100; OR 2.63; 95% CI 1.29–5.36; P = 0.008), more concentric left ventricular remodelling (OR 3.11; 95% CI 1.50–6.45; P = 0.002), and worse global longitudinal strain (OR 2.32; 95% CI 1.18–4.59; P = 0.015), independent of key covariables. Early diabetic retinopathy is associated with a high burden of coronary atherosclerosis and markers of early heart failure. Routine diabetic eye screening may serve as an effective alternative to currently advocated screening tests for detecting subclinical CVD in T2D, presenting opportunities for earlier detection and intervention. The online version contains supplementary material available at 10.1038/s41598-025-13468-4.
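
The covariate-adjusted odds ratios above are the kind produced by multivariable logistic regression; a minimal sketch with statsmodels follows. The data here are synthetic stand-ins, not the study's variables, and the column names are hypothetical.

```python
# Sketch of a covariate-adjusted logistic regression yielding odds ratios
# like those reported above; data and column names are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 255
df = pd.DataFrame({
    "retinopathy": rng.integers(0, 2, n),   # exposure (0/1)
    "age": rng.normal(64, 7, n),            # example covariates
    "male": rng.integers(0, 2, n),
})
# Simulate the outcome (coronary calcium score >= 100) with a true effect.
logit_p = -1.0 + 0.9 * df["retinopathy"] + 0.02 * (df["age"] - 64)
df["cac_ge_100"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["retinopathy", "age", "male"]])
fit = sm.Logit(df["cac_ge_100"], X).fit(disp=0)
print("OR (retinopathy):", np.exp(fit.params["retinopathy"]))
print("95% CI:", np.exp(fit.conf_int().loc["retinopathy"]).tolist())
```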

Guerrin CGJ, Tesselaar DRM, Booij J, Schellekens AFA, Homberg JR

PubMed paper · Jul 31 2025
Substance use disorders (SUD) are chronic, relapsing conditions marked by high variability in treatment response and frequent relapse. This variability arises from complex interactions among behavioral, environmental, and biological factors unique to each individual. Precision medicine, which tailors treatment to patient-specific characteristics, offers a promising avenue to address these challenges. This review explores key factors influencing SUD, including severity, comorbidities, drug use motives, polysubstance use, cognitive impairments, and biological and environmental influences. Advanced neuroimaging, such as MRI and PET, enables patient subtyping by identifying altered brain mechanisms, including reward, relief, and cognitive pathways, and striatal dopamine D2/3 receptor binding. Pharmacogenetic and epigenetic studies uncover how variations in dopaminergic, serotonergic, and opioidergic systems shape treatment outcomes. Emerging biomarkers, such as neurofilament light chain, offer non-invasive relapse monitoring. Multifactorial models integrating behavioral and neural markers outperform single-factor approaches in predicting treatment success. Machine learning refines these models, while longitudinal and preclinical studies support individualized care. Despite translational hurdles, precision medicine offers transformative potential for improving SUD treatment outcomes.

Fupei Guo, Hao Zheng, Xiang Zhang, Li Chen, Yue Wang, Songyang Zhang

arXiv preprint · Jul 31 2025
The rapid development of artificial intelligence has driven smart health with next-generation wireless communication technologies, stimulating exciting applications in remote diagnosis and intervention. To enable a timely and effective response for remote healthcare, efficient transmission of medical data through noisy channels with limited bandwidth emerges as a critical challenge. In this work, we propose a novel diffusion-based semantic communication framework, namely DiSC-Med, for medical image transmission, in which medical-enhanced compression and denoising blocks are developed for bandwidth efficiency and robustness, respectively. Unlike conventional pixel-wise communication frameworks, our proposed DiSC-Med is able to capture the key semantic information and achieve superior reconstruction performance with ultra-high bandwidth efficiency over noisy channels. Extensive experiments on real-world medical datasets validate the effectiveness of our framework, demonstrating its potential for robust and efficient telehealth applications.
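
As a rough illustration of the pipeline structure (not DiSC-Med's actual blocks), a semantic communication system pairs a learned encoder with a noisy channel model and a receiver-side denoiser. Module shapes and the AWGN channel below are assumptions.

```python
# Skeleton of a semantic communication pipeline (PyTorch). In DiSC-Med the
# receiver-side denoiser is diffusion-based; here only the structure is shown.
import torch
import torch.nn as nn

class SemanticEncoder(nn.Module):
    """Compress an image into a low-dimensional semantic latent."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.LazyLinear(latent_dim),
        )
    def forward(self, x):
        return self.net(x)

def awgn_channel(z, snr_db=10.0):
    """Additive white Gaussian noise channel at a given SNR (assumed model)."""
    power = z.pow(2).mean()
    noise_var = power / (10 ** (snr_db / 10))
    return z + noise_var.sqrt() * torch.randn_like(z)

encoder = SemanticEncoder()
x = torch.randn(4, 1, 64, 64)        # dummy batch standing in for images
z_noisy = awgn_channel(encoder(x))   # transmit latent over the noisy channel
# A receiver-side denoiser + decoder would reconstruct the image from z_noisy.
```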

Ismail FA, Karim MKA, Zaidon SIA, Noor KA

PubMed paper · Jul 31 2025
Breast cancer is a major cause of mortality among women globally. While mammography remains the gold standard for detection, its interpretation is often limited by radiologist variability and the challenge of differentiating benign and malignant lesions. The study explores the use of Auto-Sklearn, an automated machine learning (AutoML) framework, for breast tumor classification based on mammographic radiomic features. 244 mammographic images were enhanced using Contrast Limited Adaptive Histogram Equalization (CLAHE) and segmented with the Active Contour Method (ACM). Thirty-seven radiomic features, including first-order statistics, Gray-Level Co-occurrence Matrix (GLCM) texture, and shape features, were extracted and standardized. Auto-Sklearn was employed to automate model selection, hyperparameter tuning and ensemble construction. The dataset was divided into 80% training and 20% testing sets. The initial Auto-Sklearn model achieved 88.71% accuracy on the training set and 55.10% on the testing set. After the resampling strategy was applied, the accuracy for the training set and testing set increased to 95.26% and 76.16%, respectively. The area under the receiver operating characteristic curve (ROC-AUC) for the standard and resampled Auto-Sklearn models was 0.660 and 0.840, respectively, outperforming conventional models and demonstrating its efficiency in automating radiomic classification tasks. The findings underscore Auto-Sklearn's ability to automate and enhance tumor classification performance using handcrafted radiomic features. Limitations include dataset size and absence of clinical metadata. This study highlights the application of Auto-Sklearn as a scalable, automated and clinically relevant tool for breast cancer classification using mammographic radiomics.
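
For readers unfamiliar with the framework, a minimal Auto-Sklearn sketch mirroring this setup (80/20 split, then a cross-validation resampling strategy) might look as follows; the synthetic feature matrix and time budgets are assumptions.

```python
# Auto-Sklearn sketch: standard fit vs. a CV resampling strategy.
# Synthetic stand-in for the 244 x 37 standardized radiomic matrix.
import autosklearn.classification
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=244, n_features=37, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y)

automl = autosklearn.classification.AutoSklearnClassifier(
    time_left_for_this_task=3600,     # total search budget (s), assumed
    per_run_time_limit=300,           # per-model budget (s), assumed
    resampling_strategy="cv",         # the "resampling strategy" variant
    resampling_strategy_arguments={"folds": 5},
)
automl.fit(X_train.copy(), y_train.copy())
automl.refit(X_train.copy(), y_train.copy())  # required after cv resampling
print("test AUC:", roc_auc_score(y_test, automl.predict_proba(X_test)[:, 1]))
```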

Amin M, Scullin K, Nakamura K, Ontaneda D, Galioto R

PubMed paper · Jul 31 2025
Cognitive impairment (CI) in people with MS (pwMS) has complex pathophysiology. Neuropsychological testing (NPT) can be helpful, but interpretation may be challenging for clinicians. Thalamic atrophy (TA) has shown correlation with both neurodegeneration and CI. We aimed to leverage machine learning methods to link CI and longitudinal neuroimaging biomarkers. We retrospectively reviewed adult pwMS with NPT and ≥2 brain MRIs. Quantitative MRI regional change rates were calculated using mixed effects models. Participants were divided into training and validation cohorts. K-means clustering was performed based on the first and second NPT principal components (PC1 and PC2). MRI change rates were compared between clusters. 112 participants were included (mean age 48 years, 71% female, 80% relapsing remitting). Processing speed and memory were the major contributors to PC1. We identified two clusters based on PC1, one with significantly more TA in both training and validation cohorts (p = 0.035; p = 0.002) and similar rates of change in all other quantitative MRI measures. The most important contributors to PC1 included measures of processing speed (SDMT/WAIS Coding) and memory (List Learning/BVMT immediate and delayed recall). This clustering method identified a profile of NPT results strongly linked to, and possibly driven by, TA. These results confirm the validity of previously established findings using more advanced analyses, in addition to offering novel insights into NPT dimensionality reduction.
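
A minimal sketch of the described pipeline, assuming a hypothetical participants-by-tests NPT matrix and test names (stand-ins, not the study data):

```python
# PCA of standardized NPT scores followed by k-means (k=2), as described.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
npt = rng.normal(size=(112, 12))          # stand-in: 112 participants x 12 tests
test_names = [f"test_{i}" for i in range(npt.shape[1])]  # hypothetical names

npt_z = StandardScaler().fit_transform(npt)   # z-score each NPT measure
pca = PCA(n_components=2)
pcs = pca.fit_transform(npt_z)                # PC1, PC2 per participant

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pcs)

# Loadings show which tests drive PC1 (in the study: processing speed, memory).
for test, w in sorted(zip(test_names, pca.components_[0]),
                      key=lambda t: -abs(t[1])):
    print(f"{test}: {w:+.2f}")
```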

Liu Li, Qiang Ma, Cheng Ouyang, Johannes C. Paetzold, Daniel Rueckert, Bernhard Kainz

arXiv preprint · Jul 31 2025
Deep learning-based medical image segmentation techniques have shown promising results when evaluated with conventional metrics such as the Dice score or Intersection-over-Union. However, these fully automatic methods often fail to meet clinically acceptable accuracy, especially when topological constraints should be observed, e.g., continuous boundaries or closed surfaces. In medical image segmentation, the correctness of a segmentation in terms of the required topological genus is sometimes even more important than the pixel-wise accuracy. Existing topology-aware approaches commonly estimate and constrain the topological structure via the concept of persistent homology (PH). However, these methods are difficult to apply to high-dimensional data due to their polynomial computational complexity. To overcome this problem, we propose a novel and fast approach for topology-aware segmentation based on the Euler characteristic ($\chi$). First, we propose a fast formulation for $\chi$ computation in both 2D and 3D. The scalar $\chi$ error between the prediction and the ground truth serves as the topological evaluation metric. Then we estimate the spatial topology correctness of any segmentation network via a so-called topological violation map, i.e., a detailed map that highlights regions with $\chi$ errors. Finally, the segmentation results from an arbitrary network are refined based on the topological violation maps by a topology-aware correction network. Our experiments are conducted on both 2D and 3D datasets and show that our method can significantly improve topological correctness while preserving pixel-wise segmentation accuracy.
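
The Euler characteristic of a binary mask is cheap to compute, which is what makes a scalar $\chi$ error practical; a small sketch with scikit-image follows (the toy masks are illustrative, not the paper's fast formulation):

```python
# Euler characteristic error between binary masks, via scikit-image.
import numpy as np
from skimage.measure import euler_number

def chi_error(pred: np.ndarray, gt: np.ndarray, connectivity: int = 1) -> int:
    """Absolute Euler characteristic difference between two binary masks.
    In 2D, chi = #components - #holes; in 3D, chi also counts tunnels/cavities.
    """
    return abs(euler_number(pred.astype(bool), connectivity=connectivity)
               - euler_number(gt.astype(bool), connectivity=connectivity))

gt = np.zeros((64, 64), dtype=bool)
gt[8:56, 8:56] = True
gt[24:40, 24:40] = False       # a ring: 1 component, 1 hole -> chi = 0
pred = gt.copy()
pred[30, 24:40] = True         # spurious bridge splits the hole in two -> chi = -1
print(chi_error(pred, gt))     # -> 1
```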

Strijbis, V. I. J., Gurney-Champion, O. J., Grama, D. I., Slotman, B. J., Verbakel, W. F. A. R.

medRxiv preprint · Jul 31 2025
Background: Convolutional neural networks (CNNs) have emerged to reduce clinical resources and standardize auto-contouring of organs-at-risk (OARs). Although CNNs perform adequately for most patients, understanding when the CNN might fail is critical for effective and safe clinical deployment. However, the limitations of CNNs are poorly understood because of their black-box nature. Explainable artificial intelligence (XAI) can expose CNNs' inner mechanisms for classification. Here, we investigate the inner mechanisms of CNNs for segmentation and explore a novel, computational approach to a-priori flag potentially insufficient parotid gland (PG) contours. Methods: First, 3D UNets were trained in three PG segmentation situations using (1) synthetic cases; (2) 1925 clinical computed tomography (CT) scans with typical and (3) more consistent contours curated through a previously validated auto-curation step. Then, we generated attribution maps for seven XAI methods, and qualitatively assessed them for congruency between simulated and clinical contours, and how much XAI agreed with expert reasoning. To objectify observations, we explored persistent homology intensity filtrations to capture essential topological characteristics of XAI attributions. Principal component (PC) eigenvalues of Euler characteristic profiles were correlated with spatial agreement (Dice-Sørensen similarity coefficient; DSC). Evaluation was done using sensitivity, specificity and the area under the receiver operating characteristic curve (AUROC) on an external AAPM dataset, where as proof-of-principle, we regard the lowest 15% DSC as insufficient. Results: PatternNet attributions (PNet-A) focused on soft-tissue structures, whereas guided backpropagation (GBP) highlighted both soft-tissue and high-density structures (e.g. mandible bone), which was congruent with synthetic situations. Both methods typically had higher/denser activations in better auto-contoured medial and anterior lobes. Curated models produced "cleaner" gradient class-activation mapping (GCAM) attributions. Quantitative analysis showed that PC λ1 of guided GCAM (GGCAM) Euler characteristic (EC) profiles had good predictive value (sensitivity > 0.85, specificity > 0.9) of DSC for AAPM cases, with AUROC = 0.66, 0.74, 0.94, 0.83 for GBP, GCAM, GGCAM and PNet-A. For λ1 < -1.8e3 of GGCAM's EC-profile, 87% of cases were insufficient. Conclusions: GBP and PNet-A qualitatively agreed most with expert reasoning on directly (structure borders) and indirectly (proxies used for identifying structure borders) important features for PG segmentation. Additionally, this work investigated as proof-of-principle how topological data analysis could possibly be used for quantitative XAI signal analysis to a-priori mark potentially inadequate CNN segmentations, using only features from inside the predicted PG. This work used PG as a well-understood segmentation paradigm and may extend to target volumes and other organs-at-risk.
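
As a sketch of how an EC profile over an intensity filtration can be computed and reduced with PCA: thresholds, the stand-in attribution maps, and the use of the first PC score as a proxy for the λ1 feature above are all assumptions.

```python
# Euler characteristic profile of superlevel sets of an attribution map,
# swept over intensity thresholds, then PCA across cases.
import numpy as np
from skimage.measure import euler_number
from sklearn.decomposition import PCA

def ec_profile(attr: np.ndarray, n_levels: int = 64) -> np.ndarray:
    """EC of {attr >= t} as the threshold t sweeps the intensity range."""
    levels = np.linspace(attr.min(), attr.max(), n_levels)
    return np.array([euler_number(attr >= t) for t in levels])

rng = np.random.default_rng(0)
attribution_maps = [rng.random((64, 64)) for _ in range(20)]  # stand-in maps
profiles = np.stack([ec_profile(a) for a in attribution_maps])

# Leading principal-component score per case, to be correlated with DSC.
pc1 = PCA(n_components=1).fit_transform(profiles.astype(float)).ravel()
```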

Ju M, Wang B, Zhao Z, Zhang S, Yang S, Wei Z

PubMed paper · Jul 31 2025
Teacher-Student (TS) networks have become a mainstream framework for semi-supervised deep learning and are widely used in medical image segmentation. However, traditional TS frameworks based on single or homogeneous encoders often struggle to capture the rich semantic details required for complex, fine-grained tasks. To address this, we propose a novel semi-supervised medical image segmentation framework (IHE-Net), which exploits the feature discrepancies of two heterogeneous encoders to improve segmentation performance. The two encoders are instantiated with networks from different learning paradigms, namely a CNN and a Transformer/Mamba, to extract richer and more robust context representations from unlabeled data. On this basis, we propose a simple yet powerful multi-level feature discrepancy fusion module (MFDF), which effectively integrates different modal features and their discrepancies from the two heterogeneous encoders. This design enhances the representational capacity of the model through efficient fusion without introducing additional computational overhead. Furthermore, we introduce a triple consistency learning strategy to improve predictive stability by setting dual decoders and adding mixed output consistency. Extensive experimental results on three skin lesion segmentation datasets, ISIC2017, ISIC2018, and PH2, demonstrate the superiority of our framework. Ablation studies further validate the rationale and effectiveness of the proposed method. Code is available at: https://github.com/joey-AI-medical-learning/IHE-Net.
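
A hedged sketch of what a triple consistency term with dual decoders and a mixed output could look like; the averaging rule and the MSE loss are illustrative choices, not IHE-Net's exact formulation:

```python
# Triple consistency sketch: two decoder outputs pulled toward each other
# and toward a mixed output (assumed here to be their average).
import torch
import torch.nn.functional as F

def triple_consistency(p_cnn: torch.Tensor,
                       p_trans: torch.Tensor) -> torch.Tensor:
    """p_cnn, p_trans: (B, C, H, W) logits from the two decoders."""
    q_cnn, q_trans = p_cnn.softmax(1), p_trans.softmax(1)
    q_mix = 0.5 * (q_cnn + q_trans)        # mixed output (assumed average)
    return (F.mse_loss(q_cnn, q_trans)     # decoder-decoder consistency
            + F.mse_loss(q_cnn, q_mix)     # each decoder vs. mixed output
            + F.mse_loss(q_trans, q_mix))

loss = triple_consistency(torch.randn(2, 2, 64, 64), torch.randn(2, 2, 64, 64))
```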

Md. Ehsanul Haque, Abrar Fahim, Shamik Dey, Syoda Anamika Jahan, S. M. Jahidul Islam, Sakib Rokoni, Md Sakib Morshed

arXiv preprint · Jul 31 2025
Early and accurate detection of bone fractures is paramount to initiating treatment as early as possible and avoiding delays in patient care and outcomes. Interpretation of X-ray images is a time-consuming and error-prone task, especially where resources for such interpretation are limited by a lack of radiology expertise. Additionally, the deep learning approaches currently in use typically suffer from misclassifications and lack interpretable explanations for clinical use. To overcome these challenges, we propose an automated bone fracture detection framework built on a VGG-19 model modified to our needs. It incorporates sophisticated preprocessing techniques, including Contrast Limited Adaptive Histogram Equalization (CLAHE), Otsu's thresholding, and Canny edge detection, to enhance image clarity and facilitate feature extraction. In addition, we use Grad-CAM, an explainable AI method that generates visual heatmaps of the model's decision-making process, so that clinicians can understand how the model reaches its predictions. This encourages trust and helps in further clinical validation. The framework is deployed in a real-time web application, where healthcare professionals can upload X-ray images and get diagnostic feedback within 0.5 seconds. Our modified VGG-19 model attains 99.78% classification accuracy and an AUC score of 1.00. The framework provides a reliable, fast, and interpretable solution for bone fracture detection, supporting more efficient diagnosis and better patient care.
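
The named preprocessing chain is standard OpenCV fare; a short sketch follows, with the clip limit, tile size, and Canny thresholds as assumed parameter values (the input path is illustrative):

```python
# CLAHE -> Otsu -> Canny preprocessing sketch with OpenCV.
import cv2
import numpy as np

img = cv2.imread("xray.png", cv2.IMREAD_GRAYSCALE)  # illustrative path

# 1. CLAHE: local contrast enhancement.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# 2. Otsu's thresholding: automatic global binarization.
_, binary = cv2.threshold(enhanced, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 3. Canny edge detection on the enhanced image.
edges = cv2.Canny(enhanced, threshold1=50, threshold2=150)

# One possible way to combine the views before feeding the modified VGG-19.
stacked = np.dstack([enhanced, binary, edges])
```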

Wenjie Li, Yujie Zhang, Haoran Sun, Yueqi Li, Fanrui Zhang, Mengzhe Xu, Victoria Borja Clausich, Sade Mellin, Renhao Yang, Chenrun Wang, Jethro Zih-Shuo Wang, Shiyi Yao, Gen Li, Yidong Xu, Hanyu Wang, Yilin Huang, Angela Lin Wang, Chen Shi, Yin Zhang, Jianan Guo, Luqi Yang, Renxuan Li, Yang Xu, Jiawei Liu, Yao Zhang, Lei Liu, Carlos Gutiérrez SanRomán, Lei Wang

arXiv preprint · Jul 31 2025
Chest X-ray (CXR) imaging is one of the most widely used diagnostic modalities in clinical practice, encompassing a broad spectrum of diagnostic tasks. Recent advancements have seen the extensive application of reasoning-based multimodal large language models (MLLMs) in medical imaging to enhance diagnostic efficiency and interpretability. However, existing multimodal models predominantly rely on "one-time" diagnostic approaches, lacking verifiable supervision of the reasoning process. This leads to challenges in multi-task CXR diagnosis, including lengthy reasoning, sparse rewards, and frequent hallucinations. To address these issues, we propose CX-Mind, the first generative model to achieve interleaved "think-answer" reasoning for CXR tasks, driven by curriculum-based reinforcement learning and verifiable process rewards (CuRL-VPR). Specifically, we constructed an instruction-tuning dataset, CX-Set, comprising 708,473 images and 2,619,148 samples, and generated 42,828 high-quality interleaved reasoning data points supervised by clinical reports. Optimization was conducted in two stages under the Group Relative Policy Optimization framework: initially stabilizing basic reasoning with closed-domain tasks, followed by transfer to open-domain diagnostics, incorporating rule-based conditional process rewards to bypass the need for pretrained reward models. Extensive experimental results demonstrate that CX-Mind significantly outperforms existing medical and general-domain MLLMs in visual understanding, text generation, and spatiotemporal alignment, achieving an average performance improvement of 25.1% over comparable CXR-specific models. On a real-world clinical dataset (Rui-CXR), CX-Mind achieves a mean recall@1 across 14 diseases that substantially surpasses the second-best results, with multi-center expert evaluations further confirming its clinical utility across multiple dimensions.
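
A toy illustration of a rule-based verifiable process reward for interleaved think-answer traces; the tag names and label set are assumptions, not CX-Mind's actual rules:

```python
# Rule-based process reward sketch: check that a generated trajectory
# alternates <think>/<answer> segments and that each answer is a legal label.
import re

VALID_LABELS = {"yes", "no", "uncertain"}   # hypothetical closed-domain set

def process_reward(trajectory: str) -> float:
    steps = re.findall(r"<think>(.*?)</think>\s*<answer>(.*?)</answer>",
                       trajectory, flags=re.S)
    if not steps:
        return 0.0                          # malformed trace: no reward
    ok = sum(1 for _, ans in steps if ans.strip().lower() in VALID_LABELS)
    return ok / len(steps)                  # fraction of verifiable steps

print(process_reward("<think>opacity in LLL</think><answer>yes</answer>"))
```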