
Collaborative and privacy-preserving cross-vendor united diagnostic imaging via server-rotating federated machine learning.

Wang H, Zhang X, Ren X, Zhang Z, Yang S, Lian C, Ma J, Zeng D

pubmed logopapers · Aug 9 2025
Federated Learning (FL) is a distributed framework that enables collaborative training of a server model across medical data vendors while preserving data privacy. However, conventional FL faces two key challenges: substantial data heterogeneity among vendors and limited flexibility from a fixed server, leading to suboptimal performance in diagnostic-imaging tasks. To address these, we propose a server-rotating federated learning method (SRFLM). Unlike traditional FL, SRFLM designates one vendor as a provisional server for federated fine-tuning, with others acting as clients. It uses a rotational server-communication mechanism and a dynamic server-election strategy, allowing each vendor to sequentially assume the server role over time. Additionally, the communication protocol of SRFLM provides strong privacy guarantees using differential privacy. We extensively evaluate SRFLM across multiple cross-vendor diagnostic imaging tasks. We envision SRFLM as paving the way to facilitate collaborative model training across medical data vendors, thereby achieving the goal of cross-vendor united diagnostic imaging.
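The paper's implementation is not shown here, but the rotating-server idea can be sketched in a few lines. Everything below (the vendor names, `local_update`, and the FedAvg-style averaging) is a hypothetical placeholder, not the authors' SRFLM code, and the differential-privacy layer is omitted.

```python
# Minimal sketch of a server-rotation federated loop (hypothetical, not the authors' SRFLM code).
# Each round, one vendor acts as the provisional server and aggregates the others' updates.
from typing import Dict, List
import numpy as np

def local_update(weights: Dict[str, np.ndarray], vendor: str) -> Dict[str, np.ndarray]:
    """Placeholder for one vendor's local training step (here it just perturbs the weights)."""
    rng = np.random.default_rng(abs(hash(vendor)) % (2**32))
    return {k: v + 0.01 * rng.standard_normal(v.shape) for k, v in weights.items()}

def fedavg(updates: List[Dict[str, np.ndarray]]) -> Dict[str, np.ndarray]:
    """Simple unweighted FedAvg-style aggregation."""
    return {k: np.mean([u[k] for u in updates], axis=0) for k in updates[0]}

vendors = ["vendor_A", "vendor_B", "vendor_C", "vendor_D"]   # hypothetical participants
weights = {"conv1": np.zeros((3, 3)), "fc": np.zeros((8,))}  # toy model state

for round_idx in range(8):
    server = vendors[round_idx % len(vendors)]      # rotational server election (round-robin here)
    clients = [v for v in vendors if v != server]
    updates = [local_update(weights, c) for c in clients]
    weights = fedavg(updates)                        # provisional server aggregates client updates
    print(f"round {round_idx}: server={server}")
```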

Artificial intelligence with feature fusion empowered enhanced brain stroke detection and classification for disabled persons using biomedical images.

Alsieni M, Alyoubi KH

pubmed logopapers · Aug 9 2025
Brain stroke affects almost every age group, particularly people over 65. There are two major kinds of stroke: ischemic and hemorrhagic. Blockage of brain vessels causes an ischemic stroke, while rupture of blood vessels in or around the brain causes a hemorrhagic stroke. Prompt diagnosis allows patients with brain stroke to live an easier life, and recognizing strokes on medical imaging is crucial for early diagnosis and treatment planning. However, access to advanced imaging methods is limited, particularly in developing regions, making it challenging to assess brain stroke in disabled people appropriately. Hence, more accurate, faster, and more reliable diagnostic models for the timely recognition and efficient treatment of ischemic stroke are greatly needed. Artificial intelligence technologies, primarily deep learning (DL), have been widely employed for automated detection in medical imaging. This paper presents an Enhanced Brain Stroke Detection and Classification using Artificial Intelligence with Feature Fusion Technologies (EBSDC-AIFFT) model, aimed at improving diagnostic accuracy for individuals with disabilities using biomedical images. The image pre-processing stage involves resizing, normalization, data augmentation, and data splitting to enhance image quality. The EBSDC-AIFFT model then combines the Inception-ResNet-v2 model, the convolutional block attention module-ResNet18 method, and the multi-axis vision transformer technique for feature extraction. Finally, a variational autoencoder (VAE) model performs the classification. Performance validation on a brain stroke CT image dataset showed that the EBSDC-AIFFT technique achieves a superior accuracy of 99.09% compared with existing models.
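As a rough illustration of the feature-fusion idea described above, the sketch below concatenates feature vectors from several branches and feeds them to a classifier. The tiny placeholder backbones and the plain linear head are assumptions for brevity; they stand in for Inception-ResNet-v2, CBAM-ResNet18, the multi-axis vision transformer, and the VAE classifier, none of which is implemented here.

```python
# Sketch of concatenation-based feature fusion (placeholder backbones, not the EBSDC-AIFFT model).
import torch
import torch.nn as nn

class TinyBackbone(nn.Module):
    """Stand-in for a real feature extractor such as Inception-ResNet-v2 or a ViT branch."""
    def __init__(self, out_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, out_dim),
        )
    def forward(self, x):
        return self.net(x)

class FusionClassifier(nn.Module):
    """Concatenates per-branch feature vectors and classifies (linear head, not the paper's VAE)."""
    def __init__(self, branch_dims=(64, 64, 64), num_classes=2):
        super().__init__()
        self.branches = nn.ModuleList(TinyBackbone(d) for d in branch_dims)
        self.head = nn.Linear(sum(branch_dims), num_classes)
    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # feature fusion by concatenation
        return self.head(feats)

logits = FusionClassifier()(torch.randn(4, 3, 224, 224))  # toy batch standing in for CT slices
print(logits.shape)  # torch.Size([4, 2])
```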

Deep learning in rib fracture imaging: study quality assessment using the Must AI Criteria-10 (MAIC-10) checklist for artificial intelligence in medical imaging.

Getzmann JM, Nulle K, Mennini C, Viglino U, Serpi F, Albano D, Messina C, Fusco S, Gitto S, Sconfienza LM

pubmed logopapers · Aug 9 2025
To analyze the methodological quality of studies on deep learning (DL) in rib fracture imaging with the Must AI Criteria-10 (MAIC-10) checklist, and to report insights and experiences regarding the applicability of the MAIC-10 checklist. An electronic literature search was conducted on the PubMed database. After article selection, three radiologists independently rated the articles according to MAIC-10. Inter-rater agreement on the MAIC-10 score for each checklist item was assessed using Fleiss' kappa. A total of 25 original articles discussing DL applications in rib fracture imaging were identified. Most studies focused on fracture detection (n = 21, 84%). In most of the papers, internal cross-validation of the dataset was performed (n = 16, 64%), while only six studies (24%) conducted external validation. The mean MAIC-10 score of the 25 studies was 5.63 (SD, 1.84; range 1-8), with the item "clinical need" reported most consistently (100%) and the item "study design" most frequently reported incompletely (94.8%). The average inter-rater agreement for the MAIC-10 score was 0.771. The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging, with good inter-rater agreement. With regard to rib fracture imaging, items such as "study design", "explainability", and "transparency" were often not comprehensively addressed. AI in medical imaging has become increasingly common; therefore, quality-control systems for published literature, such as the MAIC-10 checklist, are needed to ensure high-quality research output. Quality-control systems are needed for research on AI in medical imaging. The MAIC-10 checklist is a valid tool to assess the quality of AI research in medical imaging. Checklist items such as "study design", "explainability", and "transparency" are frequently not addressed comprehensively.
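For readers who want to reproduce an agreement statistic like the 0.771 reported above, Fleiss' kappa for multiple raters can be computed with statsmodels; the toy ratings matrix below is invented and not the study's data.

```python
# Sketch of inter-rater agreement with Fleiss' kappa (invented toy ratings, three raters).
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = rated items (e.g., study x checklist item), columns = raters, values = assigned category.
ratings = np.array([
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 0],
    [0, 0, 0],
    [1, 1, 1],
])

table, _ = aggregate_raters(ratings)        # counts of each category per item
print(fleiss_kappa(table, method="fleiss"))  # agreement beyond chance across the three raters
```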

Emerging trends in NanoTheranostics: Integrating imaging and therapy for precision health care.

Fahmy HM, Bayoumi L, Helal NF, Mohamed NRA, Emarh Y, Ahmed AM

pubmed logopapers · Aug 9 2025
Nanotheranostics has garnered significant interest for its capacity to improve customized healthcare through targeted and efficient treatment alternatives. By integrating therapeutic and diagnostic capabilities into nanoscale devices, it promises an innovative approach to precision medicine: an integrated strategy that improves diagnosis and enables real-time, tailored treatment, revolutionizing patient care. Nanotheranostic devices can tailor outcomes at the level of the individual patient by accounting for individual differences in disease manifestation and treatment response. This review covers the full range of imaging modalities used in nanotheranostics, including MRI, CT, PET, and optical imaging (OI), which are essential for the comprehensive analysis needed in medical decision making. Integration of AI and ML into theranostics facilitates prediction of treatment outcomes and personalization of approaches, significantly enhancing reproducibility in medicine. In addition, several classes of nanoparticles, such as lipid-based and polymeric particles, iron oxide, quantum dots, and mesoporous silica, have shown promise in diagnosis and targeted drug delivery, with applications in multiple diseases including cancers, neurological disorders, and infectious diseases. Despite this potential, nanotheranostics still faces challenges in clinical applicability, alongside regulatory hurdles for new therapeutic agents. Continued research in this area is expected to broaden existing perspectives and support the integration of nanomedicine into routine healthcare, particularly with regard to efficacy and the growing emphasis on safe, personalized care.

Deep Learning-aided <sup>1</sup>H-MR Spectroscopy for Differentiating between Patients with and without Hepatocellular Carcinoma.

Bae JS, Lee HH, Kim H, Song IC, Lee JY, Han JK

pubmed logopapers · Aug 9 2025
Among patients with hepatitis B virus-associated liver cirrhosis (HBV-LC), there may be differences in the hepatic parenchyma between those with and without hepatocellular carcinoma (HCC). Proton MR spectroscopy (<sup>1</sup>H-MRS) is a well-established tool for noninvasive metabolomics, but its application in the liver has been challenging, with only a few metabolites other than lipids being detectable. This study aims to explore the potential of <sup>1</sup>H-MRS of the liver, in conjunction with deep learning, to differentiate between HBV-LC patients with and without HCC. Between August 2018 and March 2021, <sup>1</sup>H-MRS data were collected from 37 HBV-LC patients who underwent MRI for HCC surveillance, without HCC (HBV-LC group, n = 20) and with HCC (HBV-LC-HCC group, n = 17). Based on a priori knowledge from the first 10 patients of each group, large spectral datasets were simulated to develop two kinds of convolutional neural networks (CNNs): CNNs quantifying 15 metabolites and 5 lipid resonances (qCNNs) and CNNs classifying patients into HBV-LC and HBV-LC-HCC (cCNNs). The performance of the cCNNs was assessed using the remaining patients in the two groups (10 HBV-LC and 7 HBV-LC-HCC patients). Using a simulated dataset, the quantitative errors with the qCNNs were significantly lower than those with a conventional nonlinear least-squares fitting method for all metabolites and lipids (P ≤ 0.004). The cCNNs exhibited sensitivity, specificity, and accuracy of 100% (7/7), 90% (9/10), and 94% (16/17), respectively, for identifying the HBV-LC-HCC group. Deep-learning-aided <sup>1</sup>H-MRS with data augmentation by spectral simulation may have potential in differentiating between HBV-LC patients with and without HCC.
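The "data augmentation by spectral simulation" idea can be illustrated with a minimal sketch: training spectra are generated as random linear combinations of basis resonances plus baseline and noise. The Lorentzian-like basis, concentration ranges, and noise level below are invented for illustration and are not the study's simulation parameters.

```python
# Sketch of training-data simulation for spectral CNNs (invented basis spectra, not the study's basis set).
import numpy as np

n_points, n_metabolites = 512, 15
freq = np.linspace(0, 1, n_points)
rng = np.random.default_rng(0)

# Toy Lorentzian-like basis spectra standing in for metabolite/lipid resonances.
centers = rng.uniform(0.1, 0.9, n_metabolites)
basis = 1.0 / (1.0 + ((freq[None, :] - centers[:, None]) / 0.01) ** 2)

def simulate_spectrum():
    """One simulated spectrum: random concentrations x basis + baseline + noise."""
    conc = rng.uniform(0.0, 2.0, n_metabolites)                 # hypothetical concentration ranges
    baseline = rng.uniform(-0.1, 0.1) * freq
    noise = rng.normal(0.0, 0.02, n_points)
    return conc @ basis + baseline + noise, conc                # spectrum and its ground-truth labels

X, y = zip(*(simulate_spectrum() for _ in range(1000)))         # dataset to train a quantification CNN
print(np.asarray(X).shape, np.asarray(y).shape)                 # (1000, 512) (1000, 15)
```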

Self-supervised disc and cup segmentation via non-local deformable convolution and adaptive transformer.

Zhao W, Wang Y

pubmed logopapers · Aug 9 2025
Optic disc and cup segmentation is a crucial subfield of computer vision, playing a pivotal role in automated pathological image analysis. It enables precise, efficient, and automated diagnosis of ocular conditions, significantly aiding clinicians in real-world medical applications. However, due to the scarcity of medical segmentation data and the insufficient integration of global contextual information, segmentation accuracy remains suboptimal. This issue becomes particularly pronounced in optic disc and cup cases with complex anatomical structures and ambiguous boundaries. To address these limitations, this paper introduces a self-supervised training strategy integrated with a newly designed network architecture to improve segmentation accuracy. Specifically, we first propose a non-local dual deformable convolutional block, which aims to capture irregular image patterns (i.e., boundaries). Secondly, we modify the traditional vision transformer and design an adaptive K-Nearest Neighbors (KNN) transformation block to extract global semantic context from images. Finally, an initialization strategy based on self-supervised training is proposed to reduce the network's reliance on labeled data. Comprehensive experimental evaluations demonstrate the effectiveness of our proposed method, which outperforms previous networks and achieves state-of-the-art performance, with IoU scores of 0.9577 for the optic disc and 0.8399 for the optic cup on the REFUGE dataset.
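The non-local dual deformable convolutional block itself is not reproduced here, but a basic deformable-convolution building block (offsets predicted by a plain convolution, then applied via torchvision's DeformConv2d) gives a sense of how irregular boundary patterns can be sampled. This is a generic sketch, not the authors' block.

```python
# Sketch of a basic deformable-convolution block with torchvision (not the paper's non-local dual block).
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class DeformBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        # A plain conv predicts per-location sampling offsets (2 values per kernel tap).
        self.offset = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)
    def forward(self, x):
        return self.deform(x, self.offset(x))

feat = DeformBlock(3, 16)(torch.randn(1, 3, 64, 64))  # toy fundus-image tensor
print(feat.shape)  # torch.Size([1, 16, 64, 64])
```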

Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

pubmed logopapers · Aug 9 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. This retrospective single-center study from a prospective registry included patients aged 60 to 80 years with normal preoperative renal function (eGFR ≥ 60 ml/min/1.73 m<sup>2</sup>) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafts between 2015 and 2021. Patients had to have had a CT scan at 24 months to be included. Exclusion criteria were renal branches, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and mention of AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone structure, thrombus, heart, etc.), including the kidneys. Renal cysts, when present, were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR), and 96 kidneys were segmented. There was no difference between groups in age (78.9 ± 6.7 years vs. 69.4 ± 6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] ml/min/1.73 m<sup>2</sup> vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was a non-significant reduction in eGFR (84.1 ± 17.2 [61-128] ml/min/1.73 m<sup>2</sup> vs. 81 ± 16.2 [42-107], p=0.36) and in renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] ml/min/1.73 m<sup>2</sup> vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there appears to be a significant decrease in renal volume without a drop in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.
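Renal volume in the study was measured with EndoSize; as a rough illustration of the underlying computation, the sketch below derives an organ volume from a binary segmentation mask stored as NIfTI. The file name is a placeholder.

```python
# Sketch of organ-volume measurement from a segmentation mask (nibabel; placeholder file name).
import nibabel as nib
import numpy as np

seg = nib.load("kidney_mask.nii.gz")                             # hypothetical binary mask from a DL segmenter
voxel_volume_ml = np.prod(seg.header.get_zooms()[:3]) / 1000.0   # mm^3 per voxel -> mL
mask = np.asarray(seg.dataobj) > 0
print(f"kidney volume: {mask.sum() * voxel_volume_ml:.1f} mL")
```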

From Explainable to Explained AI: Ideas for Falsifying and Quantifying Explanations

Yoni Schirris, Eric Marcus, Jonas Teuwen, Hugo Horlings, Efstratios Gavves

arxiv logopreprint · Aug 9 2025
Explaining deep learning models is essential for clinical integration of medical image analysis systems. A good explanation highlights whether a model depends on spurious features that undermine generalization and harm a subset of patients or, conversely, may present novel biological insights. Although techniques like GradCAM can identify influential features, they are measurement tools that do not themselves form an explanation. We propose a human-machine-VLM interaction system tailored to explaining classifiers in computational pathology, including multi-instance learning for whole-slide images. Our proof of concept comprises (1) an AI-integrated slide viewer to run sliding-window experiments that test the claims of an explanation, and (2) quantification of an explanation's predictiveness using general-purpose vision-language models. The results demonstrate that this approach lets us qualitatively test the claims of explanations and quantitatively distinguish competing explanations. This offers a practical path from explainable AI to explained AI in digital pathology and beyond. Code and prompts are available at https://github.com/nki-ai/x2x.
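The sliding-window experiments mentioned above can be approximated by a simple occlusion test: mask each window, re-score the image, and check whether the regions named by an explanation actually drive the prediction. The toy scoring function below is a placeholder, not the authors' slide viewer or their VLM-based quantification.

```python
# Sketch of a sliding-window occlusion test for a claimed explanation (toy model, not the paper's system).
import numpy as np

def model_score(image: np.ndarray) -> float:
    """Placeholder classifier score; stands in for a pathology model scoring a slide tile."""
    return float(image.mean())

def occlusion_map(image: np.ndarray, window: int = 32, stride: int = 32) -> np.ndarray:
    """Drop in prediction when each window is masked: large drops support 'the model uses this region'."""
    base = model_score(image)
    h, w = image.shape[:2]
    heat = np.zeros(((h - window) // stride + 1, (w - window) // stride + 1))
    for i, y in enumerate(range(0, h - window + 1, stride)):
        for j, x in enumerate(range(0, w - window + 1, stride)):
            occluded = image.copy()
            occluded[y:y + window, x:x + window] = 0
            heat[i, j] = base - model_score(occluded)
    return heat

print(occlusion_map(np.random.rand(128, 128)).round(3))
```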

Quantitative radiomic analysis of computed tomography scans using machine and deep learning techniques accurately predicts histological subtypes of non-small cell lung cancer: A retrospective analysis.

Panchawagh S, Halder A, Haldule S, Sanker V, Lalwani D, Sequeria R, Naik H, Desai A

pubmed logopapers · Aug 9 2025
Non-small cell lung cancer (NSCLC) histological subtypes impact treatment decisions. While pre-surgical histopathological examination is ideal, it is not always possible. CT radiomic analysis shows promise in predicting NSCLC histological subtypes. The aim was to predict NSCLC histological subtypes from radiomic features using machine learning and deep learning models. A total of 422 lung CT scans from The Cancer Imaging Archive (TCIA) were analyzed. Primary neoplasms were segmented by expert radiologists. Using PyRadiomics, 2446 radiomic features were extracted; after feature selection, 179 features remained. Machine learning models including logistic regression (LR), support vector machine (SVM), random forest (RF), XGBoost, LightGBM, and CatBoost were employed, alongside a deep neural network (DNN) model. RF demonstrated the highest accuracy at 78% (95% CI: 70%-84%) and an AUC-ROC of 94% (95% CI: 90%-96%). LightGBM, XGBoost, and CatBoost had AUC-ROC values of 95%, 93%, and 93%, respectively. The DNN's AUC was 94.4% (95% CI: 94.1%-94.6%). Logistic regression performed worst. For histological subtype prediction, random forest, boosting models, and the DNN were superior. Quantitative radiomic analysis with machine learning can accurately determine NSCLC histological subtypes. Random forest, ensemble models, and DNNs show significant promise for pre-operative NSCLC classification, which can streamline therapy decisions.
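A minimal version of such a pipeline, assuming PyRadiomics for feature extraction and scikit-learn's random forest for classification, might look like the sketch below; the image and mask paths and the subtype labels are placeholders, and the actual study additionally used feature selection over 2446 features and several other models.

```python
# Sketch of a radiomics pipeline: PyRadiomics features + random forest (placeholder paths and labels).
import pandas as pd
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier

extractor = featureextractor.RadiomicsFeatureExtractor()        # default feature classes
cases = [("ct_001.nii.gz", "seg_001.nii.gz", 0),                 # hypothetical (CT image, tumor mask, subtype)
         ("ct_002.nii.gz", "seg_002.nii.gz", 1)]

rows, labels = [], []
for image_path, mask_path, label in cases:
    feats = extractor.execute(image_path, mask_path)              # radiomic features + diagnostics entries
    rows.append({k: v for k, v in feats.items() if not k.startswith("diagnostics_")})
    labels.append(label)

X = pd.DataFrame(rows).apply(pd.to_numeric, errors="coerce").fillna(0.0)
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X, labels)   # in practice: proper train/test split or cross-validation over hundreds of cases
```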

SamRobNODDI: q-space sampling-augmented continuous representation learning for robust and generalized NODDI.

Xiao T, Cheng J, Fan W, Dong E, Wang S

pubmed logopapers · Aug 8 2025
Neurite Orientation Dispersion and Density Imaging (NODDI) microstructure estimation from diffusion magnetic resonance imaging (dMRI) is of great significance for the discovery and treatment of various neurological diseases. Current deep learning-based methods accelerate NODDI parameter estimation and improve its accuracy. However, most methods require the number and coordinates of gradient directions during testing and training to remain strictly consistent, significantly limiting the generalization and robustness of these models in NODDI parameter estimation. Therefore, it is imperative to develop methods that can perform robustly under varying diffusion gradient directions. In this paper, we propose a q-space sampling augmentation-based continuous representation learning framework (SamRobNODDI) to achieve robust and generalized NODDI. Specifically, a continuous representation learning method based on q-space sampling augmentation is introduced to fully explore the information between different gradient directions in q-space. Furthermore, we design a sampling consistency loss to constrain the outputs of different sampling schemes, ensuring that the outputs remain as consistent as possible, thereby further enhancing performance and robustness to varying q-space sampling schemes. SamRobNODDI is also a flexible framework that can be applied to different backbone networks. SamRobNODDI was compared against seven state-of-the-art methods across 18 diverse q-space sampling schemes. Extensive experimental validations have been conducted under both identical and diverse sampling schemes for training and testing, as well as across varying sampling rates, different loss functions, and multiple network backbones. Results demonstrate that the proposed SamRobNODDI has better performance, robustness, generalization, and flexibility in the face of varying q-space sampling schemes.
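The sampling-consistency idea can be sketched as a loss term that penalizes disagreement between parameters predicted from two different q-space subsamples of the same data. The tiny network, random signals, and loss weight below are placeholders, not the SamRobNODDI implementation.

```python
# Sketch of a sampling-consistency constraint (toy network and data, not the SamRobNODDI implementation).
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(30, 64), nn.ReLU(), nn.Linear(64, 3))  # 3 toy NODDI-like parameters
signal = torch.rand(16, 90)            # hypothetical dMRI signals over 90 gradient directions
target = torch.rand(16, 3)             # reference parameter maps (e.g., model-fitted labels)

def subsample(x: torch.Tensor, n: int = 30) -> torch.Tensor:
    """Randomly pick n gradient directions, emulating a different q-space sampling scheme."""
    idx = torch.randperm(x.shape[1])[:n]
    return x[:, idx]

pred_a, pred_b = net(subsample(signal)), net(subsample(signal))
loss = nn.functional.mse_loss(pred_a, target) + 0.1 * nn.functional.mse_loss(pred_a, pred_b)
loss.backward()                         # fitting loss + consistency between two sampling schemes
print(float(loss))
```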