
Parental and carer views on the use of AI in imaging for children: a national survey.

Agarwal G, Salami RK, Lee L, Martin H, Shantharam L, Thomas K, Ashworth E, Allan E, Yung KW, Pauling C, Leyden D, Arthurs OJ, Shelmerdine SC

PubMed · Aug 9, 2025
Although the use of artificial intelligence (AI) in healthcare is increasing, stakeholder engagement remains poor, particularly around parent/carer acceptance of AI tools in paediatric imaging. We explore these perceptions and compare them with the opinions of children and young people (CYAP). A UK national online survey was conducted, inviting parents, carers and guardians of children to participate. The survey was "live" from June 2022 to 2023. It included questions about respondents' views of AI in general, as well as in specific circumstances (e.g. fractures), with respect to children's healthcare. One hundred forty-six parents/carers (mean age = 45; range = 21-80) from all four nations of the UK responded. Most respondents (93/146, 64%) believed that AI would be more accurate at interpreting paediatric musculoskeletal radiographs than healthcare professionals, but had a strong preference for human supervision (66%). Whilst male respondents were more likely to believe that AI would be more accurate (55/72, 76%), they were twice as likely as female parents/carers to believe that AI use could result in their child's data falling into the wrong hands. Most respondents (104/146, 71%) would like to be asked permission before AI is used to interpret their child's scans. Notably, 79% of parents/carers prioritised accuracy over speed, compared with 66% of CYAP. Parents/carers feel positively about AI for paediatric imaging but strongly discourage autonomous use. Acknowledging the diverse opinions of the patient population is vital to the successful integration of AI in paediatric imaging. Parents/carers prefer AI use with human supervision that prioritises accuracy, transparency and institutional accountability: AI is welcomed as a supportive tool, but not as a substitute for human expertise. Parents/carers are accepting of AI use with human supervision. Over half believe AI would replace doctors/nurses looking at bone X-rays within 5 years. Parents/carers are more likely than CYAP to trust AI's accuracy, and are also more sceptical about AI data misuse.

Enhanced hyper tuning using bioinspired-based deep learning model for accurate lung cancer detection and classification.

Kumari J, Sinha S, Singh L

PubMed · Aug 9, 2025
Lung cancer (LC) is one of the leading causes of cancer-related deaths worldwide, and early recognition is critical for improving patient outcomes. However, existing LC detection techniques face challenges such as high computational demands, complex data integration, scalability limitations, and difficulties in achieving rigorous clinical validation. This research proposes an Enhanced Hyper Tuning Deep Learning (EHTDL) model that uses bioinspired algorithms to overcome these limitations and improve the accuracy and efficiency of LC detection and classification. The methodology begins with the Smooth Edge Enhancement (SEE) technique for preprocessing CT images, followed by feature extraction using GLCM-based texture analysis. To refine the features and reduce dimensionality, a hybrid feature-selection approach combining Grey Wolf Optimization (GWO) and Differential Evolution (DE) is employed. Precise lung segmentation is performed using Mask R-CNN to ensure accurate delineation of lung regions. A Deep Fractal Edge Classifier (DFEC) is introduced, consisting of five fractal blocks with convolutional and pooling layers that progressively learn LC characteristics. The proposed EHTDL model achieves remarkable performance, including 99% accuracy, 100% precision, 98% recall, and a 99% F1-score, demonstrating its robustness and effectiveness. The model's scalability and efficiency make it suitable for real-time clinical application, offering a promising solution for early LC detection and significantly enhancing patient care.
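Of the steps above, the GLCM texture-analysis stage is standard enough to sketch. Below is a minimal, illustrative GLCM feature extractor for a CT patch using scikit-image; the patch size, pixel offsets, angles, and the choice of four Haralick-style properties are assumptions for demonstration, not the paper's settings.

```python
# Illustrative GLCM texture-feature extraction (scikit-image >= 0.19).
# All parameter choices here are assumptions for demonstration only.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(patch: np.ndarray) -> np.ndarray:
    """Return a small texture descriptor for one 8-bit CT patch."""
    glcm = graycomatrix(
        patch,
        distances=[1, 2],                                # pixel offsets (assumed)
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ("contrast", "homogeneity", "energy", "correlation")
    # Average each property over all distances and angles.
    return np.array([graycoprops(glcm, p).mean() for p in props])

patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in CT patch
print(glcm_features(patch))                               # 4-dim descriptor
```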

Neurobehavioral mechanisms of fear and anxiety in multiple sclerosis.

Meyer-Arndt L, Rust R, Bellmann-Strobl J, Schmitz-Hübsch T, Marko L, Forslund S, Scheel M, Gold SM, Hetzer S, Paul F, Weygandt M

PubMed · Aug 9, 2025
Anxiety is a common yet often underdiagnosed and undertreated comorbidity in multiple sclerosis (MS). While altered fear processing is a hallmark of anxiety in other populations, its neurobehavioral mechanisms in MS remain poorly understood. This study investigates the extent to which neurobehavioral mechanisms of fear generalization contribute to anxiety in MS. We recruited 18 persons with MS (PwMS) and anxiety, 36 PwMS without anxiety, and 23 healthy persons (HPs). Participants completed a functional MRI (fMRI) fear generalization task to assess fear processing and diffusion-weighted MRI for graph-based structural connectome analyses. Consistent with findings in non-MS anxiety populations, PwMS with anxiety exhibit fear overgeneralization, perceiving non-threatening stimuli as threatening. A machine learning model trained on HPs in a multivariate pattern analysis (MVPA) cross-decoding approach accurately predicts behavioral fear generalization in both MS groups from whole-brain fMRI fear response patterns. Regional fMRI prediction and graph-based structural connectivity analyses reveal that fear response activity and the structural network integrity of partially overlapping areas, such as the hippocampus (for fear stimulus comparison) and anterior insula (for fear excitation), are crucial for fear generalization in MS. Reduced network integrity in these regions is a direct indicator of MS anxiety. Our findings demonstrate that MS anxiety is substantially characterized by fear overgeneralization. That a model trained to associate fMRI fear response patterns with fear ratings in HPs also predicts fear ratings from fMRI data across both MS groups suggests that generic fear processing mechanisms substantially contribute to anxiety in MS.
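The cross-decoding setup is easy to illustrate in outline: fit a decoder from fMRI response patterns to fear ratings using healthy participants only, then apply it unchanged to both MS groups and score the predictions. The sketch below uses synthetic arrays and a ridge decoder as stand-ins; the paper does not specify this estimator, so every shape and model choice here is an assumption.

```python
# Toy MVPA cross-decoding sketch: train on HPs, test on PwMS.
# Synthetic data; shapes and the ridge decoder are assumptions.
import numpy as np
from sklearn.linear_model import RidgeCV
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
X_hp, y_hp = rng.normal(size=(23, 5000)), rng.normal(size=23)  # 23 HPs
X_ms, y_ms = rng.normal(size=(54, 5000)), rng.normal(size=54)  # 18 + 36 PwMS

decoder = RidgeCV(alphas=np.logspace(-3, 3, 13)).fit(X_hp, y_hp)
pred = decoder.predict(X_ms)          # cross-decoded fear ratings for PwMS
r, p = pearsonr(pred, y_ms)           # agreement with observed ratings
print(f"cross-decoding r = {r:.2f} (p = {p:.3f})")
```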

Supporting intraoperative margin assessment using deep learning for automatic tumour segmentation in breast lumpectomy micro-PET-CT.

Maris L, Göker M, De Man K, Van den Broeck B, Van Hoecke S, Van de Vijver K, Vanhove C, Keereman V

PubMed · Aug 9, 2025
Complete tumour removal is vital in curative breast cancer (BCa) surgery to prevent recurrence. Recently, [¹⁸F]FDG micro-PET-CT of lumpectomy specimens has shown promise for intraoperative margin assessment (IMA). To aid interpretation, we trained a 2D Residual U-Net to delineate invasive carcinoma of no special type in micro-PET-CT lumpectomy images. We collected 53 BCa lamella images from 19 patients with true histopathology-defined tumour segmentations. Group five-fold cross-validation yielded a Dice similarity coefficient of 0.71 ± 0.20 for segmentation. An ensemble model was then generated to segment tumours and predict margin status. Comparing predicted and true histopathological margin status in a separate set of 31 micro-PET-CT lumpectomy images from 31 patients achieved an F1 score of 84%, closely matching the mean performance of seven physicians who manually interpreted the same images. This model represents an important step towards a decision-support system that enhances micro-PET-CT-based IMA in BCa, facilitating its clinical adoption.
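For reference, the Dice similarity coefficient reported above is twice the overlap between the predicted and true masks divided by their combined size; a minimal NumPy version:

```python
# Dice similarity coefficient between two binary masks (1 = tumour).
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-8) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)

# Example: two random 3-D masks.
rng = np.random.default_rng(1)
print(dice(rng.random((64, 64, 8)) > 0.5, rng.random((64, 64, 8)) > 0.5))
```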

Collaborative and privacy-preserving cross-vendor united diagnostic imaging via server-rotating federated machine learning.

Wang H, Zhang X, Ren X, Zhang Z, Yang S, Lian C, Ma J, Zeng D

PubMed · Aug 9, 2025
Federated learning (FL) is a distributed framework that enables collaborative training of a server model across medical data vendors while preserving data privacy. However, conventional FL faces two key challenges: substantial data heterogeneity among vendors and the limited flexibility of a fixed server, leading to suboptimal performance on diagnostic-imaging tasks. To address these, we propose a server-rotating federated learning method (SRFLM). Unlike traditional FL, SRFLM designates one vendor as a provisional server for federated fine-tuning, with the others acting as clients. It uses a rotational server-communication mechanism and a dynamic server-election strategy, allowing each vendor to sequentially assume the server role over time. Additionally, SRFLM's communication protocol provides strong privacy guarantees via differential privacy. We extensively evaluate SRFLM across multiple cross-vendor diagnostic imaging tasks. We envision SRFLM paving the way for collaborative model training across medical data vendors, achieving the goal of cross-vendor united diagnostic imaging.
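The rotation itself can be sketched in a few lines. In the toy version below, each round one vendor serves as the provisional server and averages everyone's weights (plain FedAvg); SRFLM's dynamic election strategy and differential-privacy noise are omitted, and the simple round-robin schedule is an assumption.

```python
# Toy server-rotating federated averaging. Round-robin rotation is assumed;
# SRFLM's dynamic election and differential-privacy mechanisms are omitted.
import numpy as np

def fedavg(weights: list[np.ndarray]) -> np.ndarray:
    """Average model weights across participants."""
    return np.mean(weights, axis=0)

vendors = [np.random.rand(10) for _ in range(4)]    # 4 vendors' local models
for rnd in range(8):
    server = rnd % len(vendors)                     # rotate the server role
    print(f"round {rnd}: vendor {server} acts as provisional server")
    aggregated = fedavg(vendors)                    # server aggregates all weights
    vendors = [aggregated.copy() for _ in vendors]  # broadcast back to every vendor
```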

Emerging trends in NanoTheranostics: Integrating imaging and therapy for precision health care.

Fahmy HM, Bayoumi L, Helal NF, Mohamed NRA, Emarh Y, Ahmed AM

PubMed · Aug 9, 2025
Nanotheranostics has garnered significant interest for its capacity to improve customized healthcare via targeted and efficient treatment alternatives. By integrating therapeutic and diagnostic capabilities into nanoscale devices, nanotheranostics promises an innovative approach to precision medicine: an integrated approach that improves diagnosis and facilitates real-time, tailored treatment, revolutionizing patient care. With nanotheranostic devices, outcomes can be tailored to the individual patient by taking into account individual differences in disease manifestation and treatment response. This review covers the full range of imaging modalities used in nanotheranostics, including MRI and CT as well as PET and optical imaging (OI), all essential to the comprehensive analyses needed for medical decision-making. Integrating AI and machine learning into theranostics facilitates the prediction of treatment outcomes and the personalization of treatment approaches, significantly enhancing reproducibility in medicine. In addition, several classes of nanoparticles, such as lipid-based and polymeric particles, iron oxide, quantum dots, and mesoporous silica, have shown promise in diagnosis and targeted drug delivery. These nanoparticles can address multiple diseases, including cancers, neurological disorders, and infectious diseases. Despite this potential, nanotheranostics still faces issues of clinical applicability, alongside regulatory hurdles for new therapeutic agents. Continued research in this area will sharpen existing perspectives and fundamentally aid the integration of nanomedicine into conventional healthcare, particularly regarding efficacy and the growing emphasis on safe, personalized care.

Self-supervised disc and cup segmentation via non-local deformable convolution and adaptive transformer.

Zhao W, Wang Y

PubMed · Aug 9, 2025
Optic disc and cup segmentation is a crucial subfield of computer vision, playing a pivotal role in automated pathological image analysis. It enables precise, efficient, and automated diagnosis of ocular conditions, significantly aiding clinicians in real-world medical applications. However, owing to the scarcity of medical segmentation data and insufficient integration of global contextual information, segmentation accuracy remains suboptimal. The problem is particularly pronounced for optic disc and cup cases with complex anatomical structures and ambiguous boundaries. To address these limitations, this paper introduces a self-supervised training strategy integrated with a newly designed network architecture to improve segmentation accuracy. Specifically, we first propose a non-local dual deformable convolutional block, which aims to capture irregular image patterns (i.e., boundaries). Second, we modify the traditional vision transformer and design an adaptive K-Nearest Neighbors (KNN) transformation block to extract global semantic context from images. Finally, an initialization strategy based on self-supervised training is proposed to reduce the network's dependence on labeled data. Comprehensive experimental evaluations demonstrate the effectiveness of the proposed method, which outperforms previous networks and achieves state-of-the-art performance, with IoU scores of 0.9577 for the optic disc and 0.8399 for the optic cup on the REFUGE dataset.
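The IoU scores quoted above are intersection-over-union between predicted and ground-truth masks, computed per structure; a minimal NumPy version:

```python
# Intersection-over-union (Jaccard index) for one binary mask pair.
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return np.logical_and(pred, truth).sum() / union if union else 1.0
```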

Kidney volume after endovascular exclusion of abdominal aortic aneurysms by EVAR and FEVAR.

B S, C V, Turkia J B, Weydevelt E V, R P, F L, A K

PubMed · Aug 9, 2025
Decreased kidney volume is a sign of renal aging and/or decreased vascularization. The aim of this study was to determine whether renal volume changes 24 months after exclusion of an abdominal aortic aneurysm (AAA), and to compare fenestrated (FEVAR) and infrarenal (EVAR) stent grafts. This was a retrospective single-center study from a prospective registry, including patients aged 60 to 80 years with normal preoperative renal function (eGFR ≥ 60 mL/min/1.73 m²) who underwent fenestrated (FEVAR) or infrarenal (EVAR) stent grafting between 2015 and 2021. Patients had to have had a CT scan at 24 months to be included. Exclusion criteria were renal branches, preoperative renal insufficiency, a single kidney, embolization or coverage of an accessory renal artery, occlusion of a renal artery during follow-up, and AAA rupture. Renal volume was measured using sizing software (EndoSize, Therenva) based on fully automatic deep-learning segmentation of several anatomical structures (arterial lumen, bone structure, thrombus, heart, etc.), including the kidneys. Renal cysts, when present, were manually excluded from the segmentation. Forty-eight patients were included (24 EVAR vs. 24 FEVAR) and 96 kidneys were segmented. There was no baseline difference between groups in age (78.9 ± 6.7 years vs. 69.4 ± 6.8, p=0.89), eGFR (85.8 ± 12.4 [62-107] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36), or renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). At 24 months in the EVAR group, there was a non-significant reduction in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 81 ± 16.2 [42-107], p=0.36) and in renal volume (170.9 ± 29.7 [123-276] mL vs. 165.3 ± 37.4 [115-298], p=0.12). In the FEVAR group at 24 months, there was a non-significant fall in eGFR (84.1 ± 17.2 [61-128] mL/min/1.73 m² vs. 73.8 ± 21.4 [40-110], p=0.09), while renal volume decreased significantly (182 ± 37.8 [123-293] mL vs. 158.9 ± 40.2 [45-258], p=0.007). In this study, there appears to be a significant decrease in renal volume without a fall in eGFR 24 months after fenestrated stenting. This decrease may reflect changes in renal perfusion and could potentially be predictive of long-term renal impairment, although this cannot be confirmed within the limits of this small sample. Further studies with long-term follow-up are needed.

From Explainable to Explained AI: Ideas for Falsifying and Quantifying Explanations

Yoni Schirris, Eric Marcus, Jonas Teuwen, Hugo Horlings, Efstratios Gavves

arXiv preprint · Aug 9, 2025
Explaining deep learning models is essential for the clinical integration of medical image analysis systems. A good explanation highlights whether a model depends on spurious features that undermine generalization and harm a subset of patients or, conversely, whether it captures novel biological insights. Although techniques like GradCAM can identify influential features, they are measurement tools that do not in themselves constitute an explanation. We propose a human-machine-VLM interaction system tailored to explaining classifiers in computational pathology, including multi-instance learning for whole-slide images. Our proof of concept comprises (1) an AI-integrated slide viewer for running sliding-window experiments that test the claims of an explanation, and (2) quantification of an explanation's predictiveness using general-purpose vision-language models. The results demonstrate that this allows us to qualitatively test the claims of explanations and to quantifiably distinguish competing explanations. This offers a practical path from explainable AI to explained AI in digital pathology and beyond. Code and prompts are available at https://github.com/nki-ai/x2x.
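One concrete form of a sliding-window experiment is occlusion testing: blank one window at a time and record how the classifier's score moves, which probes whether a claimed feature actually drives the prediction. The sketch below is a generic version under that assumption; `model` stands in for any image-to-score function and is hypothetical, not the x2x API.

```python
# Generic sliding-window occlusion test. `model` is a hypothetical callable
# mapping an image to a scalar score; this is not the x2x API.
import numpy as np

def occlusion_map(model, image: np.ndarray, win: int = 32) -> np.ndarray:
    base = model(image)                                    # unoccluded score
    h, w = image.shape[:2]
    heat = np.zeros((h // win, w // win))
    for i in range(0, h - win + 1, win):
        for j in range(0, w - win + 1, win):
            occluded = image.copy()
            occluded[i:i + win, j:j + win] = image.mean()  # blank one window
            heat[i // win, j // win] = base - model(occluded)
    return heat                                            # score drop per window
```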

Transformer-Based Explainable Deep Learning for Breast Cancer Detection in Mammography: The MammoFormer Framework

Ojonugwa Oluwafemi Ejiga Peter, Daniel Emakporuena, Bamidele Dayo Tunde, Maryam Abdulkarim, Abdullahi Bn Umar

arXiv preprint · Aug 8, 2025
Breast cancer detection through mammography interpretation remains difficult because of the subtlety of the abnormalities experts must identify and the variability of interpretation between readers. The potential of CNNs for medical image analysis faces two limitations: they fail to adequately process both local information and wider contextual data, and they do not provide the explainable AI (XAI) capabilities that clinicians need in order to accept them in clinics. The researchers developed the MammoFormer framework, which unites a transformer-based architecture with multi-feature enhancement components and XAI functionality in a single framework. Seven architectures, spanning CNNs, the Vision Transformer (ViT), the Swin Transformer, and ConvNeXt, were tested with four enhancement techniques: original images, negative transformation, adaptive histogram equalization (AHE), and histogram of oriented gradients (HOG). The MammoFormer framework addresses critical clinical adoption barriers of AI mammography systems through: (1) systematic optimization of transformer architectures via architecture-specific feature enhancement, achieving up to 13% performance improvement; (2) comprehensive explainable-AI integration providing multi-perspective diagnostic interpretability; and (3) a clinically deployable ensemble system combining CNN reliability with the global context modeling of transformers. With suitable feature enhancements, transformer models achieve results equal to or better than CNN approaches: ViT achieves 98.3% accuracy with AHE, while the Swin Transformer gains a 13.0% advantage from HOG enhancement.
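Three of the four enhancement inputs are easy to reproduce with scikit-image; the sketch below computes AHE, a HOG rendering, and the negative transformation for a single mammogram. Parameter values are illustrative assumptions, not the paper's configuration.

```python
# Three of MammoFormer's enhancement inputs, sketched with scikit-image.
# Parameter values are assumptions for illustration.
import numpy as np
from skimage.exposure import equalize_adapthist
from skimage.feature import hog

def enhance(mammogram: np.ndarray):
    """mammogram: 2-D float image scaled to [0, 1]."""
    ahe = equalize_adapthist(mammogram, clip_limit=0.02)   # adaptive hist. eq.
    _, hog_image = hog(
        mammogram,
        orientations=9,
        pixels_per_cell=(16, 16),
        cells_per_block=(2, 2),
        visualize=True,        # return the HOG image alongside the descriptor
    )
    negative = 1.0 - mammogram                             # negative transform
    return ahe, hog_image, negative
```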