Parental and carer views on the use of AI in imaging for children: a national survey.

Agarwal G, Salami RK, Lee L, Martin H, Shantharam L, Thomas K, Ashworth E, Allan E, Yung KW, Pauling C, Leyden D, Arthurs OJ, Shelmerdine SC

PubMed · Aug 9, 2025
Although the use of artificial intelligence (AI) in healthcare is increasing, stakeholder engagement remains poor, particularly relating to understanding parent/carer acceptance of AI tools in paediatric imaging. We explore these perceptions and compare them to the opinions of children and young people (CYAP). A UK national online survey was conducted, inviting parents, carers and guardians of children to participate. The survey was "live" from June 2022 to 2023. The survey included questions asking about respondents' views of AI in general, as well as in specific circumstances (e.g. fractures) with respect to children's healthcare. One hundred forty-six parents/carers (mean age = 45; range = 21-80) from all four nations of the UK responded. Most respondents (93/146, 64%) believed that AI would be more accurate at interpreting paediatric musculoskeletal radiographs than healthcare professionals, but had a strong preference for human supervision (66%). Whilst male respondents were more likely to believe that AI would be more accurate (55/72, 76%), they were twice as likely as female parents/carers to believe that AI use could result in their child's data falling into the wrong hands. Most respondents would like to be asked permission before AI is used for the interpretation of their child's scans (104/146, 71%). Notably, 79% of parents/carers prioritised accuracy over speed compared to 66% of CYAP. Parents/carers feel positively about AI for paediatric imaging but strongly discourage autonomous use. Acknowledging the diverse opinions of the patient population is vital in aiding the successful integration of AI for paediatric imaging. Parents/carers demonstrate a preference for AI use with human supervision that prioritises accuracy, transparency and institutional accountability. AI is welcomed as a supportive tool, but not as a substitute for human expertise. Parents/carers are accepting of AI use, with human supervision. Over half believe AI would replace doctors/nurses looking at bone X-rays within 5 years. Parents/carers are more likely than CYAP to trust AI's accuracy. Parents/carers are also more sceptical about AI data misuse.

Enhanced hyper tuning using bioinspired-based deep learning model for accurate lung cancer detection and classification.

Kumari J, Sinha S, Singh L

PubMed · Aug 9, 2025
Lung cancer (LC) is one of the leading causes of cancer-related deaths worldwide, and early recognition is critical for enhancing patient outcomes. However, existing LC detection techniques face challenges such as high computational demands, complex data integration, scalability limitations, and difficulties in achieving rigorous clinical validation. This research proposes an Enhanced Hyper Tuning Deep Learning (EHTDL) model utilizing bioinspired algorithms to overcome these limitations and improve the accuracy and efficiency of LC detection and classification. The methodology begins with the Smooth Edge Enhancement (SEE) technique for preprocessing CT images, followed by feature extraction using GLCM-based texture analysis. To refine the features and reduce dimensionality, a hybrid feature selection approach combining Grey Wolf Optimization (GWO) and Differential Evolution (DE) is employed. Precise lung segmentation is performed using Mask R-CNN to ensure accurate delineation of lung regions. A Deep Fractal Edge Classifier (DFEC) is introduced, consisting of five fractal blocks with convolutional and pooling layers that progressively learn LC characteristics. The proposed EHTDL model achieves remarkable performance metrics, including 99% accuracy, 100% precision, 98% recall, and 99% F1-score, demonstrating its robustness and effectiveness. The model's scalability and efficiency make it suitable for real-time clinical application, offering a promising solution for early LC detection and significantly enhancing patient care.
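
As a concrete illustration of the texture-analysis step, the sketch below computes GLCM-based features from a CT slice with scikit-image. The distances, angles, and property list are assumptions chosen for illustration; the paper's SEE preprocessing and GWO/DE feature selection are not reproduced here.

```python
# Sketch of GLCM-based texture feature extraction; parameters are illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(ct_slice: np.ndarray) -> np.ndarray:
    """Compute a small GLCM texture descriptor for one 8-bit CT slice."""
    img = ct_slice.astype(np.uint8)
    glcm = graycomatrix(
        img,
        distances=[1, 2],                    # pixel offsets (assumed)
        angles=[0, np.pi / 4, np.pi / 2],    # directions (assumed)
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ["contrast", "homogeneity", "energy", "correlation"]
    # One value per (distance, angle) pair for each property, concatenated.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```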

Neurobehavioral mechanisms of fear and anxiety in multiple sclerosis.

Meyer-Arndt L, Rust R, Bellmann-Strobl J, Schmitz-Hübsch T, Marko L, Forslund S, Scheel M, Gold SM, Hetzer S, Paul F, Weygandt M

PubMed · Aug 9, 2025
Anxiety is a common yet often underdiagnosed and undertreated comorbidity in multiple sclerosis (MS). While altered fear processing is a hallmark of anxiety in other populations, its neurobehavioral mechanisms in MS remain poorly understood. This study investigates the extent to which neurobehavioral mechanisms of fear generalization contribute to anxiety in MS. We recruited 18 persons with MS (PwMS) and anxiety, 36 PwMS without anxiety, and 23 healthy persons (HPs). Participants completed a functional MRI (fMRI) fear generalization task to assess fear processing and diffusion-weighted MRI for graph-based structural connectome analyses. Consistent with findings in non-MS anxiety populations, PwMS with anxiety exhibit fear overgeneralization, perceiving non-threatening stimuli as threatening. A machine learning model trained on HPs in a multivariate pattern analysis (MVPA) cross-decoding approach accurately predicts behavioral fear generalization in both MS groups using whole-brain fMRI fear response patterns. Regional fMRI prediction and graph-based structural connectivity analyses reveal that fear response activity and structural network integrity of partially overlapping areas, such as the hippocampus (for fear stimulus comparison) and anterior insula (for fear excitation), are crucial for MS fear generalization. Reduced network integrity in such regions is a direct indicator of MS anxiety. Our findings demonstrate that MS anxiety is substantially characterized by fear overgeneralization. The fact that a machine learning model trained to associate fMRI fear response patterns with fear ratings in HPs predicts fear ratings from fMRI data across MS groups in an MVPA cross-decoding approach suggests that generic fear processing mechanisms contribute substantially to anxiety in MS.
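
The cross-decoding idea — fitting a model on healthy participants' fMRI fear-response patterns and transferring it to the MS groups — can be sketched in a few lines of scikit-learn. The feature matrices and rating vectors below are hypothetical placeholders, and ridge regression stands in for whatever estimator the authors actually used.

```python
# Minimal cross-decoding sketch: train on healthy participants (HPs),
# evaluate transfer to an MS group. All input arrays are hypothetical.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_decode(X_hp, y_hp, X_ms, y_ms):
    """Fit on HP fear-response patterns, predict fear ratings in MS."""
    model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
    model.fit(X_hp, y_hp)            # fit only on healthy participants
    y_pred = model.predict(X_ms)     # transfer to the MS group
    r, p = pearsonr(y_pred, y_ms)    # prediction accuracy as correlation
    return r, p
```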

Supporting intraoperative margin assessment using deep learning for automatic tumour segmentation in breast lumpectomy micro-PET-CT.

Maris L, Göker M, De Man K, Van den Broeck B, Van Hoecke S, Van de Vijver K, Vanhove C, Keereman V

PubMed · Aug 9, 2025
Complete tumour removal is vital in curative breast cancer (BCa) surgery to prevent recurrence. Recently, [¹⁸F]FDG micro-PET-CT of lumpectomy specimens has shown promise for intraoperative margin assessment (IMA). To aid interpretation, we trained a 2D Residual U-Net to delineate invasive carcinoma of no special type in micro-PET-CT lumpectomy images. We collected 53 BCa lamella images from 19 patients with true histopathology-defined tumour segmentations. Group five-fold cross-validation yielded a Dice similarity coefficient of 0.71 ± 0.20 for segmentation. Afterwards, an ensemble model was generated to segment tumours and predict margin status. Comparing predicted and true histopathological margin status in a separate set of 31 micro-PET-CT lumpectomy images from 31 patients achieved an F1 score of 84%, closely matching the mean performance of seven physicians who manually interpreted the same images. This model represents an important step towards a decision-support system that enhances micro-PET-CT-based IMA in BCa, facilitating its clinical adoption.
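
Two elements of this evaluation are easy to make concrete: patient-grouped five-fold splitting (so lamellae from one patient never span training and validation folds) and the Dice similarity coefficient. The sketch below is a minimal illustration under those assumptions; the Residual U-Net itself is omitted, and the variable names are hypothetical.

```python
# Patient-grouped 5-fold splitting plus the Dice score used for evaluation.
import numpy as np
from sklearn.model_selection import GroupKFold

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def grouped_folds(images, masks, patient_ids, n_splits: int = 5):
    """Yield train/validation indices with all lamellae of a patient in one fold."""
    gkf = GroupKFold(n_splits=n_splits)
    for train_idx, val_idx in gkf.split(images, masks, groups=patient_ids):
        yield train_idx, val_idx
```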

Collaborative and privacy-preserving cross-vendor united diagnostic imaging via server-rotating federated machine learning.

Wang H, Zhang X, Ren X, Zhang Z, Yang S, Lian C, Ma J, Zeng D

PubMed · Aug 9, 2025
Federated Learning (FL) is a distributed framework that enables collaborative training of a server model across medical data vendors while preserving data privacy. However, conventional FL faces two key challenges: substantial data heterogeneity among vendors and limited flexibility from a fixed server, leading to suboptimal performance in diagnostic-imaging tasks. To address these, we propose a server-rotating federated learning method (SRFLM). Unlike traditional FL, SRFLM designates one vendor as a provisional server for federated fine-tuning, with others acting as clients. It uses a rotational server-communication mechanism and a dynamic server-election strategy, allowing each vendor to sequentially assume the server role over time. Additionally, the communication protocol of SRFLM provides strong privacy guarantees using differential privacy. We extensively evaluate SRFLM across multiple cross-vendor diagnostic imaging tasks. We envision SRFLM as paving the way to facilitate collaborative model training across medical data vendors, thereby achieving the goal of cross-vendor united diagnostic imaging.
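
A toy sketch of the rotating-server idea is shown below: in each round one vendor acts as the provisional server and averages the others' model weights, FedAvg-style. The simple round-robin rotation, the plain averaging, and the absence of any differential-privacy noise are all simplifications relative to SRFLM, not the authors' protocol.

```python
# Toy rotational-server federated averaging; weights are flat numpy arrays.
import numpy as np

def federated_round(weights_per_vendor, server_idx):
    """The vendor acting as server averages the other vendors' weights."""
    clients = [w for i, w in enumerate(weights_per_vendor) if i != server_idx]
    return np.mean(clients, axis=0)

def rotating_server_training(init_weights, n_vendors=4, n_rounds=8):
    weights = [np.copy(init_weights) for _ in range(n_vendors)]
    for r in range(n_rounds):
        server = r % n_vendors                      # simple rotation (assumed)
        new_global = federated_round(weights, server)
        weights = [np.copy(new_global) for _ in range(n_vendors)]
    return weights[0]
```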

Artificial intelligence with feature fusion empowered enhanced brain stroke detection and classification for disabled persons using biomedical images.

Alsieni M, Alyoubi KH

PubMed · Aug 9, 2025
Brain stroke is an illness that affects almost every age group, particularly people over 65. There are two major kinds of stroke: ischemic and hemorrhagic. Blockage of brain vessels causes an ischemic stroke, while ruptured blood vessels in or around the brain cause a hemorrhagic stroke. With prompt diagnosis of brain stroke, patients can live an easier life. Recognizing strokes using medical imaging is crucial for early diagnosis and treatment planning. However, access to advanced imaging methods is limited, particularly in developing countries, making it challenging to assess brain stroke cases in disabled people appropriately. Hence, more accurate, faster, and more reliable diagnostic models for the timely recognition and efficient treatment of ischemic stroke are greatly needed. Artificial intelligence technologies, primarily deep learning (DL), have been widely employed in medical imaging for automated detection. This paper presents an Enhanced Brain Stroke Detection and Classification using Artificial Intelligence with Feature Fusion Technologies (EBSDC-AIFFT) model, which aims to improve diagnostic accuracy for individuals with disabilities using biomedical images. The image pre-processing stage involves resizing, normalization, data augmentation, and data splitting to enhance image quality. The EBSDC-AIFFT model then combines the Inception-ResNet-v2 model, the convolutional block attention module (CBAM)-ResNet18 method, and the multi-axis vision transformer technique for feature extraction. Finally, a variational autoencoder (VAE) model is implemented for the classification process. The performance of the EBSDC-AIFFT technique is validated on a brain stroke CT image dataset. In a comparison study, the EBSDC-AIFFT technique demonstrated a superior accuracy of 99.09% over existing models.
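
The feature-fusion step can be illustrated with a generic PyTorch module that concatenates pooled feature vectors from several backbones before a classification head. The backbones here are abstract placeholders rather than the Inception-ResNet-v2, CBAM-ResNet18, and multi-axis vision transformer models named in the abstract, and a linear head stands in for the VAE classifier.

```python
# Generic feature-fusion sketch: concatenate per-backbone feature vectors.
import torch
import torch.nn as nn

class FusionClassifier(nn.Module):
    def __init__(self, backbones: list, feat_dims: list, n_classes: int = 2):
        super().__init__()
        # Each backbone maps an image batch to a (batch, feat_dim) tensor.
        self.backbones = nn.ModuleList(backbones)
        self.head = nn.Linear(sum(feat_dims), n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [b(x).flatten(1) for b in self.backbones]  # one vector per backbone
        fused = torch.cat(feats, dim=1)                    # feature fusion by concatenation
        return self.head(fused)
```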

Deep learning in rib fracture imaging: study quality assessment using the Must AI Criteria-10 (MAIC-10) checklist for artificial intelligence in medical imaging.

Getzmann JM, Nulle K, Mennini C, Viglino U, Serpi F, Albano D, Messina C, Fusco S, Gitto S, Sconfienza LM

PubMed · Aug 9, 2025
To analyze the methodological quality of studies on deep learning (DL) in rib fracture imaging with the Must AI Criteria-10 (MAIC-10) checklist, and to report insights and experiences regarding the applicability of the MAIC-10 checklist. An electronic literature search was conducted on the PubMed database. After selection of articles, three radiologists independently rated the articles according to MAIC-10. Inter-rater agreement on the MAIC-10 score for each checklist item was assessed using Fleiss' kappa coefficient. A total of 25 original articles discussing DL applications in rib fracture imaging were identified. Most studies focused on fracture detection (n = 21, 84%). In most of the research papers, internal cross-validation of the dataset was performed (n = 16, 64%), while only six studies (24%) conducted external validation. The mean MAIC-10 score of the 25 studies was 5.63 (SD, 1.84; range 1-8), with the item "clinical need" being reported most consistently (100%) and the item "study design" being most frequently reported incompletely (94.8%). The average inter-rater agreement for the MAIC-10 score was 0.771. The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging, with good inter-rater agreement. With regard to rib fracture imaging, items such as "study design", "explainability", and "transparency" were often not comprehensively addressed. AI in medical imaging has become increasingly common; quality control systems for published literature, such as the MAIC-10 checklist, are therefore needed to ensure high-quality research output. Quality control systems are needed for research on AI in medical imaging. The MAIC-10 checklist is a valid tool for assessing the quality of AI research in medical imaging. Checklist items such as "study design", "explainability", and "transparency" are frequently not comprehensively addressed.
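
For readers unfamiliar with the agreement statistic, the sketch below shows how per-item inter-rater agreement can be computed with Fleiss' kappa using statsmodels. The ratings array is invented for illustration; it is not the study's data.

```python
# Fleiss' kappa for one checklist item; rows = studies, columns = the three raters.
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [1, 1, 1],
    [0, 1, 1],
    [1, 1, 0],
    [1, 1, 1],
])  # hypothetical item scores for four studies

table, _ = aggregate_raters(ratings)        # counts per category per study
kappa = fleiss_kappa(table, method="fleiss")
print(f"Fleiss' kappa: {kappa:.3f}")
```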

Emerging trends in NanoTheranostics: Integrating imaging and therapy for precision health care.

Fahmy HM, Bayoumi L, Helal NF, Mohamed NRA, Emarh Y, Ahmed AM

PubMed · Aug 9, 2025
Nanotheranostics has garnered significant interest for its capacity to improve customized healthcare via targeted and efficient treatment alternatives. Nanotheranostics promises an innovative approach to precision medicine by integrating therapeutic and diagnostic capabilities into nanoscale devices. It provides an integrated approach that improves diagnosis and facilitates real-time, tailored treatment, revolutionizing patient care. Through the application of nanotheranostic devices, outcomes can be tailored to individual patients by taking into account individual differences in disease manifestation and treatment response. This review covers the full range of imaging modalities used in nanotheranostics, including MRI, CT, PET, and optical imaging (OI), which are essential for the comprehensive analysis needed in medical decision-making. Integration of AI and ML into theranostics facilitates predicting treatment outcomes and personalizing treatment approaches, which significantly enhances reproducibility in medicine. In addition, several nanoparticle classes, such as lipid-based and polymeric particles, iron oxide, quantum dots, and mesoporous silica, have shown promise in diagnosis and targeted drug delivery. These nanoparticles are capable of treating multiple diseases, including cancers, neurological disorders, and infectious diseases. Despite this potential, nanotheranostics still encounters issues regarding clinical applicability, alongside regulatory hurdles pertaining to new therapeutic agents. Continued research in this field should enhance existing perspectives and fundamentally aid the integration of nanomedicine into conventional healthcare, especially with regard to efficacy and the growing emphasis on safe, personalized care.

Deep Learning-aided ¹H-MR Spectroscopy for Differentiating between Patients with and without Hepatocellular Carcinoma.

Bae JS, Lee HH, Kim H, Song IC, Lee JY, Han JK

PubMed · Aug 9, 2025
Among patients with hepatitis B virus-associated liver cirrhosis (HBV-LC), there may be differences in the hepatic parenchyma between those with and without hepatocellular carcinoma (HCC). Proton MR spectroscopy (¹H-MRS) is a well-established tool for noninvasive metabolomics, but has been challenging to apply in the liver, where only a few metabolites other than lipids can be detected. This study explores the potential of ¹H-MRS of the liver, in conjunction with deep learning, to differentiate between HBV-LC patients with and without HCC. Between August 2018 and March 2021, ¹H-MRS data were collected from 37 HBV-LC patients who underwent MRI for HCC surveillance, without HCC (HBV-LC group, n = 20) and with HCC (HBV-LC-HCC group, n = 17). Based on a priori knowledge from the first 10 patients of each group, large spectral datasets were simulated to develop two kinds of convolutional neural networks (CNNs): CNNs quantifying 15 metabolites and 5 lipid resonances (qCNNs), and CNNs classifying patients into HBV-LC and HBV-LC-HCC (cCNNs). The performance of the cCNNs was assessed using the remaining patients in the two groups (10 HBV-LC and 7 HBV-LC-HCC patients). Using a simulated dataset, the quantitative errors with the qCNNs were significantly lower than those with a conventional nonlinear least-squares fitting method for all metabolites and lipids (P ≤ 0.004). The cCNNs exhibited sensitivity, specificity, and accuracy of 100% (7/7), 90% (9/10), and 94% (16/17), respectively, for identifying the HBV-LC-HCC group. Deep-learning-aided ¹H-MRS with data augmentation by spectral simulation may have potential in differentiating between HBV-LC patients with and without HCC.
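
A minimal sketch of a classification CNN for 1D MR spectra, in the spirit of the cCNNs, is given below. The layer sizes, spectral length, and two-class output are assumptions for illustration, not the architecture reported in the paper.

```python
# Minimal 1D-CNN for classifying MR spectra into two groups (assumed sizes).
import torch
import torch.nn as nn

class SpectrumClassifier(nn.Module):
    def __init__(self, n_points: int = 1024, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (n_points // 4), n_classes)

    def forward(self, spectrum: torch.Tensor) -> torch.Tensor:
        # spectrum: (batch, 1, n_points) real-valued spectrum
        return self.classifier(self.features(spectrum).flatten(1))
```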

Self-supervised disc and cup segmentation via non-local deformable convolution and adaptive transformer.

Zhao W, Wang Y

PubMed · Aug 9, 2025
Optic disc and cup segmentation is a crucial subfield of computer vision, playing a pivotal role in automated pathological image analysis. It enables precise, efficient, and automated diagnosis of ocular conditions, significantly aiding clinicians in real-world medical applications. However, due to the scarcity of medical segmentation data and the insufficient integration of global contextual information, segmentation accuracy remains suboptimal. This issue becomes particularly pronounced in optic disc and cup cases with complex anatomical structures and ambiguous boundaries. To address these limitations, this paper introduces a self-supervised training strategy integrated with a newly designed network architecture to improve segmentation accuracy. Specifically, we first propose a non-local dual deformable convolutional block, which aims to capture irregular image patterns (i.e., boundaries). Second, we modify the traditional vision transformer and design an adaptive K-Nearest Neighbors (KNN) transformation block to extract global semantic context from images. Finally, an initialization strategy based on self-supervised training is proposed to reduce the network's reliance on labeled data. Comprehensive experimental evaluations demonstrate the effectiveness of our proposed method, which outperforms previous networks and achieves state-of-the-art performance, with IoU scores of 0.9577 for the optic disc and 0.8399 for the optic cup on the REFUGE dataset.
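
The reported IoU scores can be reproduced from binary masks with a short intersection-over-union routine, sketched below under the assumption that predicted and ground-truth masks share the same shape.

```python
# IoU (Jaccard) score between a predicted and a ground-truth binary mask.
import numpy as np

def iou_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (intersection + eps) / (union + eps)
```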