
Pierrick Coupé, Boris Mansencal, Floréal Morandat, Sergio Morell-Ortega, Nicolas Villain, Jose V. Manjón, Vincent Planche

arXiv preprint · Jun 3, 2025
INTRODUCTION: Quantification of amyloid plaques (A), neurofibrillary tangles (T2), and neurodegeneration (N) using PET and MRI is critical for Alzheimer's disease (AD) diagnosis and prognosis. Existing pipelines face limitations regarding processing time, variability in tracer types, and challenges in multimodal integration. METHODS: We developed petBrain, a novel end-to-end processing pipeline for amyloid-PET, tau-PET, and structural MRI. It leverages deep learning-based segmentation, standardized biomarker quantification (Centiloid, CenTauR, HAVAs), and simultaneous estimation of A, T2, and N biomarkers. The pipeline is implemented as a web-based platform, requiring no local computational infrastructure or specialized software knowledge. RESULTS: petBrain provides reliable and rapid biomarker quantification, with results comparable to existing pipelines for A and T2. It shows strong concordance with data processed in ADNI databases. The staging and quantification of A/T2/N by petBrain demonstrated good agreement with CSF/plasma biomarkers, clinical status, and cognitive performance. DISCUSSION: petBrain represents a powerful and openly accessible platform for standardized AD biomarker analysis, facilitating applications in clinical research.
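The abstract reports amyloid quantification on the Centiloid scale. As a rough illustration of how an SUVR is mapped onto that scale, here is the standard linear transform from Klunk et al. (2015); the anchor values below are placeholders, not petBrain's tracer-specific calibration:

```python
# Sketch of the standard Centiloid linear transform (Klunk et al., 2015),
# not petBrain's internal code; the anchor values are tracer-specific and
# the numbers used here are illustrative assumptions.

def centiloid(suvr: float, suvr_yc: float = 1.0, suvr_ad: float = 2.0) -> float:
    """Map an amyloid-PET SUVR onto the Centiloid scale.

    suvr_yc: mean SUVR of young controls (anchors 0 CL).
    suvr_ad: mean SUVR of typical AD patients (anchors 100 CL).
    Both anchors must come from a tracer-specific calibration study.
    """
    return 100.0 * (suvr - suvr_yc) / (suvr_ad - suvr_yc)

print(centiloid(1.4))  # e.g. 40.0 CL with the placeholder anchors above
```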

Cerrone D, Riccobelli D, Gazzoni S, Vitullo P, Ballarin F, Falco J, Acerbi F, Manzoni A, Zunino P, Ciarletta P

PubMed · Jun 3, 2025
Glioblastoma (GBL) is among the most aggressive brain tumors in adults, characterized by patient-specific invasion patterns driven by the underlying brain microstructure. In this work, we present a proof-of-concept for a mathematical model of GBL growth, enabling real-time prediction and patient-specific parameter identification from longitudinal neuroimaging data. The framework exploits a diffuse-interface mathematical model to describe the tumor evolution and a reduced-order modeling strategy, relying on proper orthogonal decomposition, trained on synthetic data derived from patient-specific brain anatomies reconstructed from magnetic resonance imaging and diffusion tensor imaging. A neural network surrogate learns the inverse mapping from tumor evolution to model parameters, achieving significant computational speed-up while preserving high accuracy. To ensure robustness and interpretability, we perform both global and local sensitivity analyses, identifying the key biophysical parameters governing tumor dynamics and assessing the stability of the inverse problem solution. These results establish a methodological foundation for future clinical deployment of patient-specific digital twins in neuro-oncology.
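As a sketch of the two computational ingredients described here, the snippet below builds a proper orthogonal decomposition (POD) basis from synthetic snapshots via the SVD and trains a small neural surrogate for the inverse map from reduced tumor states to model parameters. The dimensions, parameter count, and network size are illustrative assumptions, not the paper's configuration:

```python
# Minimal sketch of POD compression plus a neural surrogate for the inverse
# map (reduced tumor state -> biophysical parameters). All shapes and data
# below are synthetic stand-ins.
import numpy as np
import torch
import torch.nn as nn

# Synthetic snapshot matrix: each column is a flattened tumor field.
n_dof, n_snapshots, n_params = 5000, 200, 3
snapshots = np.random.rand(n_dof, n_snapshots)
params = np.random.rand(n_snapshots, n_params)   # e.g. diffusivity, growth rate, ...

# POD basis via truncated SVD; keep r dominant modes.
U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
r = 20
basis = U[:, :r]                                  # (n_dof, r)
coords = (basis.T @ snapshots).T                  # reduced coordinates, (n_snapshots, r)

# Neural surrogate learning the inverse mapping.
surrogate = nn.Sequential(nn.Linear(r, 64), nn.ReLU(), nn.Linear(64, n_params))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
x = torch.tensor(coords, dtype=torch.float32)
y = torch.tensor(params, dtype=torch.float32)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(surrogate(x), y)
    loss.backward()
    opt.step()

# At inference, a new tumor snapshot is projected onto the basis and the
# surrogate returns parameter estimates without solving the full PDE model.
```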

Alyanak B, Çakar İ, Dede BT, Yıldızgören MT, Bağcıer F

PubMed · Jun 3, 2025
This study aims to evaluate the reliability of plantar fascia thickness measurements performed by ChatGPT-4 using magnetic resonance imaging (MRI) compared to those obtained by an experienced clinician. In this retrospective, single-center study, foot MRI images from the hospital archive were analysed. Plantar fascia thickness was measured under both blinded and non-blinded conditions by an experienced clinician and ChatGPT-4 at two separate time points. Measurement reliability was assessed using the intraclass correlation coefficient (ICC), mean absolute error (MAE), and mean relative error (MRE). A total of 41 participants (32 females, 9 males) were included. The average plantar fascia thickness measured by the clinician was 4.20 ± 0.80 mm and 4.25 ± 0.92 mm under blinded and non-blinded conditions, respectively, while ChatGPT-4's measurements were 6.47 ± 1.30 mm and 6.46 ± 1.31 mm, respectively. Human evaluators demonstrated excellent agreement (ICC = 0.983-0.989), whereas ChatGPT-4 exhibited low reliability (ICC = 0.391-0.432). In thin plantar fascia cases, ChatGPT-4's error rate was higher, with MAE = 2.70 mm and MRE = 77.17% under blinded conditions, and MAE = 2.91 mm and MRE = 87.02% under non-blinded conditions. ChatGPT-4 demonstrated lower reliability in plantar fascia thickness measurements compared to an experienced clinician, with increased error rates in thin structures. These findings highlight the limitations of AI-based models in medical image analysis and emphasize the need for further refinement before clinical implementation.
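For readers unfamiliar with the reported agreement metrics, the following sketch computes MAE, MRE, and a two-way random-effects single-measure ICC(2,1) on dummy measurements. The abstract does not state which ICC model was used, so ICC(2,1) is an assumption:

```python
# Hedged sketch of the agreement metrics named in the abstract. The data are
# dummy values, not the study's measurements.
import numpy as np

def icc_2_1(ratings: np.ndarray) -> float:
    """ICC(2,1) (Shrout & Fleiss) for an (n_subjects, k_raters) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    col_means = ratings.mean(axis=0)
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subject MS
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-rater MS
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                        # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

clinician = np.array([4.1, 3.8, 4.6, 4.0, 4.4])   # mm, dummy measurements
model     = np.array([6.3, 6.0, 6.9, 6.4, 6.7])   # mm, dummy measurements

mae = np.abs(model - clinician).mean()
mre = 100 * (np.abs(model - clinician) / clinician).mean()
print(f"ICC(2,1)={icc_2_1(np.column_stack([clinician, model])):.3f}, "
      f"MAE={mae:.2f} mm, MRE={mre:.1f}%")
```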

Xia X, Qiu J, Tan Q, Du W, Gou Q

PubMed · Jun 3, 2025
To develop and evaluate radiomics-based models using contrast-enhanced T1-weighted imaging (CE-T1WI) for the non-invasive differentiation of primary central nervous system lymphoma (PCNSL) and solitary brain metastasis (SBM), aiming to improve diagnostic accuracy and support clinical decision-making. This retrospective study included a cohort of 324 patients pathologically diagnosed with PCNSL (n=115) or SBM (n=209) between January 2014 and December 2024. Tumor regions were manually segmented on CE-T1WI, and a comprehensive set of 1561 radiomic features was extracted. The most important features were identified with a two-step selection approach based on least absolute shrinkage and selection operator (LASSO) regression. Multiple machine learning classifiers were trained and validated to assess diagnostic performance, evaluated using area under the curve (AUC), accuracy, sensitivity, and specificity. The effectiveness of the radiomics-based models was further assessed using decision curve analysis with a risk threshold of 0.5 to balance false positives and false negatives. Twenty-three features were identified through LASSO regression. All classifiers demonstrated robust AUC and accuracy, with 15 of 20 classifiers achieving AUC values above 0.9. In 10-fold cross-validation, the artificial neural network (ANN) classifier achieved the highest AUC of 0.9305, followed by the support vector machine with polynomial kernel (SVMPOLY) classifier at 0.9226. Notably, on the independent test set the support vector machine with radial basis function kernel (SVMRBF) classifier performed best, with an AUC of 0.9310 and the highest accuracy of 0.8780. The selected models - SVMRBF, SVMPOLY, ensemble learning with LDA (ELDA), ANN, random forest (RF), and gradient boosting with random undersampling boosting (GBRUSB) - all showed significant clinical utility, with standardized net benefits (sNBs) above 0.6. These results underline the potential of the radiomics-based models to reliably distinguish PCNSL from SBM. Radiomics-driven models based on CE-T1WI show encouraging potential for accurately distinguishing between PCNSL and SBM, and the SVMRBF classifier showed the greatest diagnostic efficacy of all classifiers tested, indicating potential clinical utility in differential diagnosis.
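A minimal sketch of the two-step pipeline described above, LASSO feature selection followed by an RBF-kernel SVM scored by AUC, is shown below using scikit-learn on synthetic stand-in data; it is not the authors' code:

```python
# Illustrative sketch of LASSO feature selection followed by an RBF-kernel SVM,
# evaluated by AUC. The feature matrix and labels are random stand-ins.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(324, 1561))          # radiomic features (dummy)
y = rng.integers(0, 2, size=324)          # 0 = SBM, 1 = PCNSL (dummy labels)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Step 1: LASSO drives most coefficients to zero; keep the survivors.
lasso = LassoCV(cv=5).fit(StandardScaler().fit_transform(X_tr), y_tr)
keep = np.flatnonzero(lasso.coef_)
if keep.size == 0:                         # pure-noise data may zero everything out
    keep = np.arange(23)                   # fall back to an arbitrary subset

# Step 2: RBF-kernel SVM on the selected features, scored by AUC.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1])
print(f"{keep.size} features selected, test AUC = {auc:.3f}")
```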

Yahaya BS, Osman ND, Karim NKA, Appalanaido GK, Isa IS

PubMed · Jun 3, 2025
Computed tomography (CT) has been widely used as an effective tool for liver imaging due to its high spatial resolution and ability to differentiate tissue densities, contributing to comprehensive image analysis. Recent advancements in artificial intelligence (AI) have promoted the role of machine learning (ML) in managing liver cancers by predicting or classifying tumours using mathematical algorithms. Deep learning (DL), a subset of ML, expanded these capabilities through convolutional neural networks (CNN) that analyse large data volumes automatically. This review examines the methods, achievements, limitations, and performance outcomes of ML-based radiomics and DL models for liver malignancies on CT imaging. A systematic search for full-text articles in English on CT radiomics and DL in liver cancer analysis was conducted in the PubMed, Scopus, Science Citation Index, and Cochrane Library databases between 2020 and 2024 using the keywords machine learning, radiomics, deep learning, computed tomography, and liver cancer, and associated MeSH terms. PRISMA guidelines were used to identify and screen studies for inclusion. A total of 49 studies were included: 17 radiomics, 24 DL, and 8 combined DL/radiomics studies. Radiomics has been predominantly utilised for predictive analysis, while DL has been extensively applied to automatic liver and tumour segmentation, with a recent surge in studies integrating both techniques. Despite the growing popularity of DL methods, classical radiomics models remain relevant and are often preferred when performance is similar, owing to their lower computational and data requirements. Model performance keeps improving, but challenges such as data scarcity and a lack of standardised protocols persist.

Afnouch M, Bougourzi F, Gaddour O, Dornaika F, Ahmed AT

PubMed · Jun 3, 2025
Artificial intelligence is transforming medical imaging, particularly the analysis of bone metastases (BM), a serious complication of advanced cancers. Machine learning and deep learning techniques offer new opportunities to improve the detection, recognition, and segmentation of bone metastases, yet challenges such as limited data, interpretability, and clinical validation remain. Following PRISMA guidelines, we reviewed artificial intelligence methods and applications for bone metastasis analysis across major imaging modalities, including CT, MRI, PET, SPECT, and bone scintigraphy. The survey covers traditional machine learning models and modern deep learning architectures such as CNNs and transformers. We also examined available datasets and their role in the development of artificial intelligence in this field. Artificial intelligence models have achieved strong performance across tasks and modalities, with convolutional neural network (CNN) and transformer architectures performing particularly well. However, limitations persist, including data imbalance, overfitting risks, and the need for greater transparency. Clinical translation is also challenged by regulatory and validation hurdles. Artificial intelligence holds strong potential to improve BM diagnosis and streamline radiology workflows. To reach clinical maturity, future work must address data diversity, model explainability, and large-scale validation, critical steps toward trusted integration into routine oncology care.

Westerhoff, M., Gyftopoulos, S., Dane, B., Vega, E., Murdock, D., Lindow, N., Herter, F., Bousabarah, K., Recht, M. P., Bredella, M. A.

medRxiv preprint · Jun 3, 2025
Background: Osteoporosis is underdiagnosed and undertreated, prompting the exploration of opportunistic screening using CT and artificial intelligence (AI). Purpose: To develop a reproducible deep learning-based convolutional neural network to automatically place a 3D region of interest (ROI) in trabecular bone, develop a correction method to normalize attenuation across different CT protocols and scanner models, and establish thresholds for osteoporosis in a large diverse population. Methods: A deep learning-based method was developed to automatically quantify trabecular attenuation using a 3D ROI of the thoracic and lumbar spine on chest, abdomen, or spine CTs, adjusted for different tube voltages and scanner models. Normative values and osteoporosis thresholds for trabecular attenuation of the spine were established across a diverse population, stratified by age, sex, race, and ethnicity, using the WHO-reported prevalence of osteoporosis. Results: 538,946 CT examinations from 283,499 patients (mean age 65 ± 15 years; 51.2% women and 55.5% White), performed on 50 scanner models using six different tube voltages, were analyzed. Hounsfield units at 80 kVp versus 120 kVp differed by 23%, whereas different scanner models produced differences of <10%. Automated ROI placement in 1496 vertebrae was validated by manual radiologist review, demonstrating >99% agreement. Mean trabecular attenuation was higher in young women (<50 years) than young men (p<.001) and decreased with age, with a steeper decline in postmenopausal women. In patients older than 50 years, trabecular attenuation was higher in males than females (p<.001). Trabecular attenuation was highest in Blacks, followed by Asians, and lowest in Whites (p<.001). The threshold for L1 in diagnosing osteoporosis was 80 HU. Conclusion: Deep learning-based automated opportunistic osteoporosis screening can identify patients with low bone mineral density who undergo CT scans for clinical purposes on different scanners and protocols. Key results: (1) In a study of 538,946 CT examinations performed in 283,499 patients using different scanner models and imaging protocols, an automated deep learning-based convolutional neural network was able to accurately place a three-dimensional region of interest within thoracic and lumbar vertebrae to measure trabecular attenuation. (2) Tube voltage had a larger influence on attenuation values (23%) than scanner model (<10%). (3) A threshold of 80 HU was identified for L1 to diagnose osteoporosis using an automated three-dimensional region of interest.
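As a hedged illustration of the screening logic, the snippet below normalizes an L1 trabecular attenuation for tube voltage and applies the reported 80 HU threshold. The per-voltage correction factors are assumptions loosely modeled on the 23% difference the study reports, not the paper's calibration:

```python
# Sketch of opportunistic screening: voltage-normalize the L1 trabecular ROI
# attenuation, then apply the study's 80 HU threshold. The correction factors
# below are illustrative assumptions, not the published calibration.

# Multiplicative correction to a 120 kVp reference, keyed by tube voltage.
KVP_CORRECTION = {80: 1.0 / 1.23, 100: 1.0 / 1.10, 120: 1.0, 140: 1.05}  # assumed

def normalized_hu(mean_roi_hu: float, kvp: int) -> float:
    """Rescale a trabecular ROI attenuation to the 120 kVp reference."""
    return mean_roi_hu * KVP_CORRECTION[kvp]

def flags_osteoporosis(l1_hu: float, kvp: int, threshold: float = 80.0) -> bool:
    """Apply the study's L1 threshold (80 HU) after voltage normalization."""
    return normalized_hu(l1_hu, kvp) < threshold

print(flags_osteoporosis(95.0, kvp=80))   # 95 HU at 80 kVp -> ~77 HU -> flagged
```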

Chen W, McMillan AB

PubMed · Jun 2, 2025
This paper introduces an efficient sub-model ensemble framework aimed at enhancing the interpretability of medical deep learning models, thus increasing their clinical applicability. By generating uncertainty maps, the framework enables end-users to evaluate the reliability of model outputs. We developed a strategy to generate diverse models from a single well-trained checkpoint, facilitating the training of a model family. This involves producing multiple outputs from a single input, fusing them into a final output, and estimating uncertainty from the disagreement among outputs. Implemented with U-Net and UNETR models for segmentation and synthesis tasks, the approach was tested on CT body segmentation and MR-CT synthesis datasets. It achieved a mean Dice coefficient of 0.814 in segmentation and a mean absolute error of 88.17 HU in synthesis, improved from 89.43 HU through pruning. The framework was also evaluated under image corruption and data undersampling, maintaining the correlation between uncertainty and error, which highlights its robustness. These results suggest that the proposed approach not only maintains the performance of well-trained models but also enhances interpretability through effective uncertainty estimation, applicable to both convolutional and transformer models across a range of imaging tasks.
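One simple way to instantiate the sub-model idea is sketched below: perturb the weights of a single trained checkpoint to obtain a model family, fuse the outputs by averaging, and read uncertainty off their disagreement. The perturbation scheme is an assumption for illustration; the abstract does not specify how the authors generate their sub-models:

```python
# Hedged sketch: derive sub-models from one checkpoint by small random weight
# perturbations (an assumed strategy), fuse outputs, and map disagreement to
# per-voxel uncertainty.
import copy
import torch
import torch.nn as nn

def make_submodels(model: nn.Module, n: int = 5, scale: float = 0.01):
    subs = []
    for _ in range(n):
        sub = copy.deepcopy(model)
        with torch.no_grad():
            for p in sub.parameters():
                p.add_(scale * p.abs().mean() * torch.randn_like(p))
        subs.append(sub.eval())
    return subs

@torch.no_grad()
def fused_prediction(subs, x):
    preds = torch.stack([s(x) for s in subs])      # (n, B, C, H, W)
    return preds.mean(0), preds.std(0)             # fused output, uncertainty map

# Toy stand-in for a trained segmentation/synthesis network:
net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(8, 1, 3, padding=1))
mean_out, unc_map = fused_prediction(make_submodels(net), torch.randn(1, 1, 64, 64))
print(mean_out.shape, unc_map.shape)  # both (1, 1, 64, 64)
```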

Nanammal V, Rajalakshmi S, Remya V, Ranjith S

PubMed · Jun 2, 2025
In modern healthcare, telemedicine, health records, and AI-driven diagnostics depend on medical image watermarking to secure chest X-rays for pneumonia diagnosis, ensuring data integrity, confidentiality, and authenticity. A 2024 study found that over 70% of healthcare institutions faced medical image data breaches, yet current methods falter in imperceptibility, robustness against attacks, and deployment efficiency. ViTU-Net integrates cutting-edge techniques to address these multifaceted challenges in medical image security and analysis. The model's core component, the Vision Transformer (ViT) encoder, efficiently captures global dependencies and spatial information, while the U-Net decoder enhances image reconstruction; both components leverage the Adaptive Hierarchical Spatial Attention (AHSA) module for improved spatial processing. A patch-based LSB embedding mechanism embeds reversible fragile watermarks within each patch of the segmented region of non-interest (RONI), guided dynamically by adaptive masks derived from the attention mechanism; this minimizes the impact on diagnostic accuracy while maximizing precision and making optimal use of spatial information. The hybrid meta-heuristic optimization algorithm, TuniBee Fusion, dynamically optimizes watermarking parameters, striking a balance between exploration and exploitation and thereby enhancing watermarking efficiency and robustness. The incorporation of advanced cryptographic techniques, including SHA-512 hashing and AES encryption, fortifies the model's security, ensuring the authenticity and confidentiality of watermarked medical images. A PSNR of 60.7 dB, an NCC of 0.9999, and an SSIM of 1.00 underscore its effectiveness in preserving image quality, security, and diagnostic accuracy. Robustness analysis against a spectrum of attacks validates ViTU-Net's resilience in real-world scenarios.
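The bit-plane mechanics of patch-based LSB embedding in a RONI can be sketched as follows. This simplified version omits the attention-derived masks, SHA-512 hashing, and AES encryption of the full ViTU-Net pipeline, and true reversibility would additionally require storing the original LSBs:

```python
# Simplified sketch of LSB embedding in a non-diagnostic region (RONI):
# only the bit-plane mechanics, not the full ViTU-Net pipeline.
import numpy as np

def embed_lsb(image: np.ndarray, mask: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write watermark bits into the LSB of pixels where mask is True."""
    out = image.copy()
    idx = np.flatnonzero(mask)
    assert bits.size <= idx.size, "RONI too small for payload"
    flat = out.reshape(-1)                         # view into out
    flat[idx[:bits.size]] = (flat[idx[:bits.size]] & 0xFE) | bits
    return out

def extract_lsb(image: np.ndarray, mask: np.ndarray, n_bits: int) -> np.ndarray:
    idx = np.flatnonzero(mask)
    return image.reshape(-1)[idx[:n_bits]] & 1

xray = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # dummy chest X-ray
roni = np.zeros((64, 64), dtype=bool)
roni[:8, :] = True                              # assume top rows are non-diagnostic
payload = np.random.randint(0, 2, 128).astype(np.uint8)

marked = embed_lsb(xray, roni, payload)
assert np.array_equal(extract_lsb(marked, roni, 128), payload)
print("max pixel change:", np.abs(marked.astype(int) - xray.astype(int)).max())  # 1
```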

Zhao, Y., Alizadeh, E., Taha, H. B., Liu, Y., Xu, M., Mahoney, J. M., Li, S.

bioRxiv preprint · Jun 2, 2025
Deep learning models trained with spatial omics data uncover complex patterns and relationships among cells, genes, and proteins in a high-dimensional space. State-of-the-art in silico spatial multi-cell gene expression methods using histological images of tissue stained with hematoxylin and eosin (H&E) allow us to characterize cellular heterogeneity. We developed a vision transformer (ViT) framework, named SPiRiT, to map histological signatures to spatial single-cell transcriptomic signatures. SPiRiT predicts single-cell spatial gene expression from matched H&E image tiles of human breast cancer and whole mouse pup, evaluated on Xenium (10x Genomics) datasets. Importantly, SPiRiT incorporates rigorous strategies to ensure reproducibility and robustness of predictions and provides trustworthy interpretation through attention-based model explainability. Model interpretation revealed the image regions and attention details SPiRiT uses to predict gene expression, such as marker genes in invasive cancer cells. In an apples-to-apples comparison with ST-Net, SPiRiT improved predictive accuracy by 40%, and its gene predictions and expression levels were highly consistent with the tumor region annotation. In summary, SPiRiT demonstrates the feasibility of inferring spatial single-cell gene expression from tissue morphology across multiple species.
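Conceptually, the tile-to-expression mapping can be sketched as a ViT backbone with a linear regression head, as below. The backbone choice (torchvision's vit_b_16), tile size, and gene count are assumptions, not SPiRiT's actual architecture:

```python
# Conceptual sketch: a ViT encodes an H&E tile, a linear head regresses
# per-gene expression. Not the SPiRiT implementation.
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

class TileToExpression(nn.Module):
    def __init__(self, n_genes: int = 280):
        super().__init__()
        self.backbone = vit_b_16(weights=None)     # pretrained weights in practice
        self.backbone.heads = nn.Identity()        # keep the 768-dim CLS embedding
        self.head = nn.Linear(768, n_genes)        # per-gene expression regression

    def forward(self, tiles: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(tiles))

model = TileToExpression()
tiles = torch.randn(4, 3, 224, 224)                # batch of H&E tiles
print(model(tiles).shape)                          # torch.Size([4, 280])
```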