
Stogiannos N, Skelton E, van Leeuwen KG, Edgington S, Shelmerdine SC, Malamateniou C

Sep 23 2025
To explore the perspectives of AI vendors on the integration of AI in medical imaging and oncology clinical practice. An online survey was created on Qualtrics, comprising 23 closed and 5 open-ended questions. It was administered through social media, personalised emails, and the channels of the European Society of Medical Imaging Informatics and the Health AI Register, to all those working at a company developing or selling accredited AI solutions for medical imaging and oncology. Quantitative data were analysed using SPSS software, version 28.0. Qualitative data were summarised using content analysis in NVivo, version 14. In total, 83 valid responses were received, with participants having a global distribution and diverse roles and professional backgrounds (business, management, clinical practice, engineering, IT, etc.). Respondents identified the top enablers (practitioner acceptance, the business case of AI applications, explainability) and challenges (new regulations, practitioner acceptance, the business case) of AI implementation. Co-production with end-users was confirmed as a key practice by most (52.9%). Respondents recognised infrastructure issues within clinical settings (64.1%), lack of clinician engagement (54.7%), and lack of financial resources (42.2%) as key challenges in meeting customer expectations. They called for appropriate reimbursement, robust IT support, clinician acceptance, rigorous regulation, and adequate user training to ensure the successful integration of AI into clinical practice. This study highlights that people, infrastructure, and funding are fundamental to AI implementation. AI vendors wish to work closely with regulators, patients, clinical practitioners, and other key stakeholders to ensure a smooth transition of AI into daily practice.

Question: AI vendors' perspectives on unmet needs, challenges, and opportunities for AI adoption in medical imaging are largely underrepresented in recent research.
Findings: Provision of consistent funding, optimised infrastructure, and user acceptance were highlighted by vendors as key enablers of AI implementation.
Clinical relevance: Vendors' input and collaboration with clinical practitioners are necessary to clinically implement AI. This study highlights real-world challenges that AI vendors face and opportunities they value during AI implementation. Keeping the dialogue channels open is key to these collaborations.

He X, Wang L, Yang Q, Wang J, Xing Z, Cao D, Cai C, Cai S

Sep 23 2025
Objective: Pharmacokinetic (PK) parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) provide quantitative characterization of tissue perfusion and permeability. However, existing deep learning methods for PK parameter estimation rely on either temporal or spatial features alone, overlooking the integrated spatial-temporal characteristics of DCE-MRI data. This study aims to remove this barrier by fully leveraging the spatial and temporal information to improve parameter estimation.
Approach: A spatial-temporal information-driven unsupervised deep learning method (STUDE) was proposed. STUDE combines convolutional neural networks (CNNs) and a customized Vision Transformer (ViT) to separately capture spatial and temporal features, enabling comprehensive modelling of contrast agent dynamics and tissue heterogeneity. In addition, a spatial-temporal attention (STA) feature fusion module was proposed to enable adaptive focus on both dimensions for more effective feature fusion. Moreover, the extended Tofts model imposed physical constraints on PK parameter estimation, enabling unsupervised training of STUDE. The accuracy and diagnostic value of STUDE were compared with the conventional non-linear least squares (NLLS) method and representative deep learning-based methods (i.e., GRU, CNN, U-Net, and VTDCE-Net) on a numerical brain phantom and 87 glioma patients, respectively.
Main results: On the numerical brain phantom, STUDE produced PK parameter maps with the lowest systematic and random errors, even under low-SNR conditions (SNR = 10 dB). On glioma data, STUDE generated parameter maps with reduced noise compared to NLLS and superior structural clarity compared to the other methods. Furthermore, STUDE outperformed all other methods in the identification of glioma isocitrate dehydrogenase (IDH) mutation status, achieving area under the curve (AUC) values of 0.840 and 0.908 for the receiver operating characteristic curves of K^trans and V_e, respectively. A combination of all PK parameters improved the AUC to 0.926.
Significance: STUDE advances spatial-temporal information-driven and physics-informed learning for precise PK parameter estimation, demonstrating its potential clinical significance.
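For reference, the extended Tofts model that supplies STUDE's physical constraint is not restated in the abstract; its standard form, which relates the tissue concentration curve to the arterial input function, is:

```latex
C_t(t) = v_p\, C_p(t) + K^{\mathrm{trans}} \int_0^t C_p(\tau)\,
\exp\!\left(-\frac{K^{\mathrm{trans}}}{v_e}\,(t-\tau)\right) d\tau
```

Here C_t is the tissue concentration, C_p the arterial input function, K^trans the volume transfer constant, v_e the extravascular extracellular volume fraction, and v_p the plasma volume fraction. Requiring the network's predicted parameters to reproduce the measured curves through this relation is what permits unsupervised training without ground-truth parameter maps.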

M M, G S, Bendre M, Nirmal M

Sep 23 2025
Brain tumors represent a significant neurological challenge, affecting individuals across all age groups. Accurate and timely diagnosis of tumor types is critical for effective treatment planning. Magnetic Resonance Imaging (MRI) remains a primary diagnostic modality due to its non-invasive nature and ability to provide detailed brain imaging. However, traditional tumor classification relies on expert interpretation, which is time-consuming and prone to subjectivity. This study proposes a novel deep learning architecture, the Dual-Feature Cross-Fusion Network (DF-CFN), for the automated classification of brain tumors using MRI data. The model integrates ConvNeXt for capturing global contextual features and a shallow CNN combined with the Frequency Channel Attention Network (FcaNet) for extracting local features. These are fused through a cross-feature fusion mechanism for improved classification. The model is trained and validated on a Kaggle dataset encompassing four classes (glioma, meningioma, pituitary, and non-tumor), achieving an accuracy of 99.33%. Its generalizability is further confirmed on the Figshare dataset, yielding 99.22% accuracy. Comparative analyses with baseline and recent models validate the superiority of DF-CFN in terms of precision and robustness. This approach demonstrates strong potential for assisting clinicians in reliable brain tumor classification, thereby improving diagnostic efficiency and reducing the burden on healthcare professionals.
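The abstract does not specify the internals of the cross-feature fusion mechanism, so the PyTorch sketch below is only an illustration of the general idea: a hypothetical channel-attention module (name and structure assumed) that fuses the global branch (ConvNeXt) with the local branch (shallow CNN + FcaNet).

```python
# Hypothetical fusion module in the spirit of DF-CFN; the paper's actual
# mechanism may differ. Concatenate the two branches, reweight channels
# with attention, then project back to the working width.
import torch
import torch.nn as nn

class CrossFeatureFusion(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                              # global pooling
            nn.Conv2d(2 * channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),                                         # channel weights
        )
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, global_feat, local_feat):
        fused = torch.cat([global_feat, local_feat], dim=1)
        return self.project(fused * self.attn(fused))

# Example: fuse 64-channel maps produced by the two branches
fusion = CrossFeatureFusion(64)
out = fusion(torch.randn(1, 64, 28, 28), torch.randn(1, 64, 28, 28))
```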

Tkachenko M, Huber B, Hamotskyi S, Jansen-Winkeln B, Gockel I, Neumuth T, Köhler H, Maktabi M

Sep 23 2025
This study compares various preprocessing techniques for hyperspectral deep learning-based cancer diagnostics. It considers different spectrum scaling and noise reduction options across the spatial and spectral axes of hyperspectral datacubes, as well as varying levels of blood and light-reflection removal. We also examine how the size of the patches extracted from the hyperspectral data affects the models' performance, and we additionally explore various strategies to mitigate our dataset's imbalance (cancerous tissues are underrepresented). Our results indicate that:
- Scaling: standardization significantly improves both sensitivity and specificity compared to normalization.
- Patch size: larger input patches enhance performance by capturing more spatial context.
- Noise reduction: unexpectedly degrades performance.
- Blood filtering: more effective than filtering reflected-light pixels, although neither approach produces significant results.
By carefully maintaining consistent testing conditions, we ensure a fair comparison across preprocessing methods and reproducibility. Our findings highlight the necessity of careful preprocessing selection to maximize deep learning performance in medical imaging applications.
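As a minimal sketch of the two spectrum-scaling options compared above, the snippet below applies per-spectrum standardization and min-max normalization along the spectral axis; the axis convention and the epsilon terms are assumptions for illustration.

```python
# Per-spectrum scaling of a hyperspectral patch (spectral axis last).
import numpy as np

def standardize(spectra: np.ndarray) -> np.ndarray:
    """Zero-mean, unit-variance scaling along the spectral axis."""
    mean = spectra.mean(axis=-1, keepdims=True)
    std = spectra.std(axis=-1, keepdims=True)
    return (spectra - mean) / (std + 1e-8)

def normalize(spectra: np.ndarray) -> np.ndarray:
    """Min-max scaling to [0, 1] along the spectral axis."""
    lo = spectra.min(axis=-1, keepdims=True)
    hi = spectra.max(axis=-1, keepdims=True)
    return (spectra - lo) / (hi - lo + 1e-8)

# Example: an 8x8-pixel patch with 100 spectral bands
patch = np.random.rand(8, 8, 100)
z_scaled = standardize(patch)  # the option the study found superior
```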

Whiteside DJ, Rouse MA, Jones PS, Coyle-Gilchrist I, Murley AG, Stockton K, Hughes LE, Bethlehem RAI, Warrier V, Lambon Ralph MA, Rittman T, Rowe JB

Sep 23 2025
People with semantic dementia (SD) or semantic variant primary progressive aphasia typically present with marked atrophy of the anterior temporal lobe, and thereafter progress more slowly than other forms of frontotemporal dementia. This suggests a prolonged prodromal phase with accumulation of neuropathology and minimal symptoms, about which little is known. To study early and presymptomatic SD, we first examine a well-characterised cohort of people with SD recruited from the Cambridge Centre for Frontotemporal Dementia. Five people with early SD had coincidental MRI prior to the onset of symptoms, or were healthy volunteers in research with anterior temporal lobe atrophy as an incidental finding. We model longitudinal imaging changes in left- and right-lateralised SD to predict atrophy at symptom onset. We then assess 61,203 participants with structural brain MRI in the UK Biobank to find individuals with imaging changes in keeping with SD but with no neurodegenerative diagnosis. To identify these individuals in UK Biobank, we design an ensemble-based classifier, differentiating baseline structural MRI in SD from healthy controls and patients with other neurodegenerative diseases, including other causes of frontotemporal lobar degeneration. We train the classifier on a Cambridge-based cohort (SD n=47, other neurodegenerative diseases n=498, healthy controls n=88) and test it on a combined cohort from the Neuroimaging in Frontotemporal Dementia study and the Alzheimer's Disease Neuroimaging Initiative (SD n=42, other neurodegenerative diseases n=449, healthy controls n=127). From our case series, we find people with marked atrophy three to five years before recognition of symptom onset in left- or right-predominant SD. We present right-lateralised cases with subtle multimodal semantic impairment, found concurrently with only mild behavioural disturbance. We show that imaging measures can be used to reliably and accurately differentiate clinical SD from other neurodegenerative diseases (recall 0.88, precision 0.95, F1 score 0.91). We find individuals with no neurodegenerative diagnosis in the UK Biobank with striking left-lateralised (prevalence 4.8/100,000 at ages 45-85) or right-lateralised (5.9/100,000) anterior temporal lobe atrophy, with deficits on cognitive testing suggestive of semantic impairment. These individuals show progressive involvement of other cognitive domains in longitudinal follow-up. Together, our findings suggest that (i) there is a burden of incipient early anterior temporal lobe atrophy in older populations, with comparable prevalence of left- and right-sided cases from this prospective unbiased approach to identification, (ii) substantial atrophy is required for manifest symptoms, particularly in right-lateralised cases, and (iii) semantic deficits across multiple domains can be detected in the early symptomatic phase.
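As a quick consistency check, the reported F1 score follows directly from the stated precision (P) and recall (R):

```latex
F_1 = \frac{2PR}{P + R} = \frac{2 \times 0.95 \times 0.88}{0.95 + 0.88} \approx 0.91
```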

Cao Y, Qin T, Liu Y

Sep 23 2025
Accurate ischemic stroke lesion segmentation helps define the optimal reperfusion treatment and unveil the stroke etiology. Although diffusion-weighted MRI (DWI) is central to stroke diagnosis, learning from multi-sequence MRI, such as apparent diffusion coefficient (ADC) maps, can capitalize on the complementary information across modalities and shows strong potential to improve segmentation performance. However, existing deep learning-based methods require large amounts of well-annotated data from multiple modalities for training, and acquiring such datasets is often impractical. We explore semi-supervised stroke lesion segmentation from multi-sequence MRI, using unlabeled data to improve performance under limited annotation, and propose a novel framework that exploits cross-modality collaboration and discrepancy to efficiently utilize unlabeled data. Specifically, we adopt a cross-modal bidirectional copy-paste strategy to enable information collaboration between modalities and a cross-modal discrepancy-informed correction strategy to learn efficiently from limited labeled multi-sequence MRI data and abundant unlabeled data. Extensive experiments on the ischemic stroke lesion segmentation (ISLES 22) dataset demonstrate that our method efficiently utilizes unlabeled data, achieving a 12.32% DSC improvement over a supervised baseline using 10% annotations, and outperforms existing semi-supervised segmentation methods.
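The abstract names a cross-modal bidirectional copy-paste strategy without detailing it; assuming it follows the usual copy-paste augmentation recipe (swapping a region between a labeled and an unlabeled volume in both directions), a minimal sketch might look like this, with the crop size and function name as illustrative assumptions.

```python
# Swap a random cuboid between a labeled and an unlabeled volume, producing
# two mixed volumes (labeled-into-unlabeled and vice versa) plus the mask
# needed to mix the corresponding supervision signals.
import numpy as np

def bidirectional_copy_paste(labeled, unlabeled, crop=(32, 32, 32), rng=None):
    rng = rng or np.random.default_rng()
    mask = np.zeros(labeled.shape, dtype=bool)
    start = [rng.integers(0, s - c + 1) for s, c in zip(labeled.shape, crop)]
    region = tuple(slice(st, st + c) for st, c in zip(start, crop))
    mask[region] = True
    mixed_a = np.where(mask, labeled, unlabeled)   # labeled patch pasted in
    mixed_b = np.where(mask, unlabeled, labeled)   # unlabeled patch pasted in
    return mixed_a, mixed_b, mask

dwi_labeled = np.random.rand(96, 96, 64)    # placeholder DWI volume
adc_unlabeled = np.random.rand(96, 96, 64)  # placeholder ADC volume
mixed_a, mixed_b, mask = bidirectional_copy_paste(dwi_labeled, adc_unlabeled)
```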

Nag MK, Sadhu AK, Das S, Kumar C, Choudhary S

Sep 23 2025
Segmenting ischemic stroke lesions from Non-Contrast CT (NCCT) scans is a complex task because these lesions are hypo-intense relative to surrounding healthy brain tissue and, in many cases, iso-intense with the lateral ventricles. Identifying early acute ischemic stroke lesions in NCCT remains particularly challenging. Computer-assisted detection and segmentation can serve as valuable tools to support clinicians in stroke diagnosis. This paper introduces CoAt U SegNet, a novel deep learning model designed to detect and segment acute ischemic stroke lesions from NCCT scans. Unlike conventional 3D segmentation models, it uses an advanced 3D deep learning approach to enhance delineation accuracy; traditional machine learning models have struggled to achieve satisfactory segmentation performance, highlighting the need for more sophisticated techniques. For model training, 50 NCCT scans were used, with 10 scans for validation and 500 scans for testing. The encoder convolution blocks incorporated dilation rates of 1, 3, and 5 to capture multi-scale features effectively. Performance evaluation on the 500 unseen NCCT scans yielded a Dice similarity score of 75% and a Jaccard index of 70%, demonstrating notable improvement in segmentation accuracy. An enhanced similarity index was employed to refine lesion segmentation, which can further aid in distinguishing the penumbra from the core infarct, contributing to improved clinical decision-making.
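Only the dilation rates (1, 3, and 5) are given in the abstract; the parallel-branch layout and summation merge in this PyTorch sketch are assumptions to illustrate how such an encoder block can capture multi-scale context at a fixed spatial resolution.

```python
# Encoder block with three parallel 3D convolutions at dilation rates 1, 3, 5.
# padding == dilation keeps the spatial size constant for a 3x3x3 kernel.
import torch
import torch.nn as nn

class MultiDilationBlock(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in (1, 3, 5)
        ])
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Merge multi-scale features by summation
        return self.act(sum(branch(x) for branch in self.branches))

block = MultiDilationBlock(1, 16)
features = block(torch.randn(1, 1, 32, 64, 64))  # a single-channel NCCT patch
```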

He Z, McMillan AB

Sep 23 2025
The application of artificial intelligence (AI) in medical imaging has revolutionized diagnostic practices, enabling advanced analysis and interpretation of radiological data. This study presents a comprehensive evaluation of radiomics-based and deep learning-based approaches for disease detection in chest radiography, focusing on COVID-19, lung opacity, and viral pneumonia. While deep learning models, particularly convolutional neural networks (CNNs) and vision transformers (ViTs), learn directly from image data, radiomics-based models extract handcrafted features, offering potential advantages in data-limited scenarios. We systematically compared the diagnostic performance of various AI models, including Decision Trees, Gradient Boosting, Random Forests, Support Vector Machines (SVMs), and Multi-Layer Perceptrons (MLPs) for radiomics, against state-of-the-art deep learning models such as InceptionV3, EfficientNetL, and ConvNeXtXLarge. Performance was evaluated across multiple sample sizes. At 24 samples, EfficientNetL achieved an AUC of 0.839, outperforming SVM (AUC = 0.762). At 4000 samples, InceptionV3 achieved the highest AUC of 0.996, compared to 0.885 for Random Forest. A Scheirer-Ray-Hare test confirmed significant main and interaction effects of model type and sample size on all metrics. Post hoc Mann-Whitney U tests with Bonferroni correction further revealed consistent performance advantages for deep learning models across most conditions. These findings provide statistically validated, data-driven recommendations for model selection in diagnostic AI. Deep learning models demonstrated higher performance and better scalability with increasing data availability, while radiomics-based models may remain useful in low-data contexts. This study addresses a critical gap in AI-based diagnostic research by offering practical guidance for deploying AI models across diverse clinical environments.
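The post hoc procedure described above (pairwise Mann-Whitney U tests with Bonferroni correction) can be sketched as follows; the per-run AUC arrays are placeholders, not the study's data.

```python
# Pairwise Mann-Whitney U tests across models with a Bonferroni-corrected
# significance threshold.
import numpy as np
from scipy.stats import mannwhitneyu

auc_scores = {
    "InceptionV3": np.array([0.99, 0.98, 0.99, 0.97, 0.99]),   # placeholder
    "RandomForest": np.array([0.88, 0.89, 0.87, 0.90, 0.88]),  # placeholder
    "SVM": np.array([0.76, 0.77, 0.75, 0.78, 0.76]),           # placeholder
}

names = list(auc_scores)
pairs = [(a, b) for i, a in enumerate(names) for b in names[i + 1:]]
alpha = 0.05 / len(pairs)  # Bonferroni correction

for a, b in pairs:
    stat, p = mannwhitneyu(auc_scores[a], auc_scores[b], alternative="two-sided")
    verdict = "significant" if p < alpha else "not significant"
    print(f"{a} vs {b}: U={stat:.1f}, p={p:.4f} ({verdict})")
```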

Lin J, Luo J, Luo Y, Zhuang Y, Mo T, Wen S, Chen T, Yun G, Zeng H

Sep 23 2025
To develop an accessible model integrating clinical, MRI, and radiomic features to predict periventricular leukomalacia (PVL) in high-risk infants. Two hundred and seventeen infants (2015-2022) with suspected motor abnormalities were stratified into training (n = 124), internal validation (n = 31), and external validation (n = 62) cohorts by MRI scanner. Radiomic features were extracted from white matter regions on axial sequences. Feature selection employed T-tests, correlation filtering, Random Forest, and LASSO regression. Multivariate logistic models were evaluated by receiver operating characteristic (ROC) analysis, accuracy, sensitivity, specificity, positive predictive value, negative predictive value, calibration, decision curve analysis (DCA), net reclassification index (NRI), and integrated discrimination improvement (IDI). Clinical predictors (gestational age, neonatal hypoglycemia, hypoxic-ischemic events, infection) and MRI features (dilated lateral ventricles, delayed myelination, and periventricular abnormal signal) were retained through univariate and multivariate screening. Five predictive models were developed and validated using internal testing, bootstrapping, and external cohorts: a clinical model (Model C), an MRI model (Model M), a clinical + MRI model (Model C + M), a radiomics model, and a clinical + MRI + radiomics model (Model C + M + R). Among them, Model C + M + R achieved the best overall performance, with an area under the curve (AUC) of 0.96 (95% CI: 0.90-1.00), accuracy of 0.87 (95% CI: 0.76-0.94), sensitivity of 0.88, specificity of 0.85, PPV of 0.96, and NPV of 0.65 in the external validation cohort. Compared with Model C + M, Model C + M + R demonstrated significant reclassification (NRI = 0.631, p < 0.001) and discrimination improvements (IDI = 0.037, p = 0.020). Conventional MRI-derived radiomics enhances PVL risk stratification, and this interpretable, accessible model provides a new tool for high-risk infant evaluation.

Question: Periventricular leukomalacia requires early identification to optimize neurorehabilitation, but early white matter injury in infants is challenging to identify through conventional MRI visual assessment.
Findings: The clinical-MRI-radiomic model demonstrates the best performance for predicting PVL, with an AUC of 0.93 in the training and 0.96 in the external validation cohort.
Clinical relevance: An accessible and interpretable predictive tool for PVL prediction has been developed and validated, which may enable earlier targeted interventions.
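The feature-selection cascade named above (T-test filter, correlation filter, LASSO) might be implemented as in the sketch below; all thresholds and the placeholder data are illustrative assumptions, not the study's settings, and the Random Forest step is omitted for brevity.

```python
# Three-stage radiomic feature selection: univariate filter, redundancy
# removal, then sparse (LASSO) selection.
import numpy as np
from scipy.stats import ttest_ind
from sklearn.linear_model import LassoCV

def select_features(X, y, p_thresh=0.05, corr_thresh=0.9):
    # 1) Keep features that differ between PVL and non-PVL groups
    _, pvals = ttest_ind(X[y == 1], X[y == 0], axis=0)
    X = X[:, pvals < p_thresh]
    # 2) Drop one feature from each highly correlated pair
    corr = np.abs(np.corrcoef(X, rowvar=False))
    n = corr.shape[0]
    drop = {j for i in range(n) for j in range(i + 1, n) if corr[i, j] > corr_thresh}
    X = X[:, [i for i in range(X.shape[1]) if i not in drop]]
    # 3) LASSO retains features with non-zero coefficients
    lasso = LassoCV(cv=5).fit(X, y)
    return X[:, lasso.coef_ != 0]

X = np.random.rand(124, 200)        # 124 training infants, 200 features
y = np.random.randint(0, 2, 124)    # placeholder PVL labels
X_selected = select_features(X, y)
```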

Bounias D, Simons L, Baumgartner M, Ehring C, Neher P, Kapsner LA, Kovacs B, Floca R, Jaeger PF, Eberle J, Hadler D, Laun FB, Ohlmeyer S, Maier-Hein L, Uder M, Wenkel E, Maier-Hein KH, Bickelhaupt S

Sep 23 2025
Breast diffusion-weighted imaging (DWI) has shown potential as a standalone imaging technique for certain indications, e.g., supplemental screening of women with dense breasts. This study evaluates an artificial intelligence (AI)-powered computer-aided diagnosis (CAD) system for clinical interpretation and workload reduction in breast DWI. This retrospective, IRB-approved study included n = 824 examinations for model development (2017-2020) and n = 235 for evaluation (01/2021-06/2021). Three readers interpreted each examination either with the AI-CAD or manually. BI-RADS-like (Breast Imaging Reporting and Data System) classification was based on DWI, with histopathology serving as ground truth. The model was nnDetection-based, trained using 5-fold cross-validation and ensembling. Statistical significance was determined using McNemar's test, inter-rater agreement was calculated using Cohen's kappa, and model performance was quantified by the area under the receiver operating characteristic curve (AUC). The AI-augmented approach significantly reduced BI-RADS-like 3 calls in breast DWI by 29% (P = .019) and increased inter-rater agreement (0.57 ± 0.10 vs 0.49 ± 0.11), while preserving diagnostic accuracy. Two of the three readers detected more malignant lesions with the AI-CAD (63/69 vs 59/69 and 64/69 vs 62/69). The AI model achieved an AUC of 0.78 (95% CI: [0.72, 0.85]; P < .001), which increased to 0.82 for women at screening age (95% CI: [0.73, 0.90]; P < .001), indicating a potential workload reduction of 20.9% at 96% sensitivity. Breast DWI might therefore benefit from AI support: in our study, AI showed potential for reducing BI-RADS-like 3 calls and increasing inter-rater agreement. However, given the limited study size, further research is needed.
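McNemar's test, used above for paired comparisons of reading outcomes, can be run as in this sketch; the 2x2 contingency counts are placeholders, not the study's data.

```python
# McNemar's test on paired reading outcomes (manual vs AI-CAD).
from statsmodels.stats.contingency_tables import mcnemar

# Rows: manual reading correct / incorrect
# Columns: AI-CAD reading correct / incorrect
table = [[150, 10],   # placeholder counts
         [25, 50]]
result = mcnemar(table, exact=True)
print(f"statistic={result.statistic:.1f}, p-value={result.pvalue:.4f}")
```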