Page 17 of 40395 results

Computer-Aided Decision Support Systems of Alzheimer's Disease Diagnosis - A Systematic Review.

Günaydın T, Varlı S

pubmed · papers · Jun 3 2025
The incidence of Alzheimer's disease is rising with the increasing elderly population worldwide. While no cure exists, early diagnosis can significantly slow disease progression. Computer-aided diagnostic systems are becoming critical tools for assisting in the early detection of Alzheimer's disease. In this systematic review, we aim to evaluate recent advancements in computer-aided decision support systems for Alzheimer's disease diagnosis, focusing on data modalities, machine learning methods, and performance metrics. We conducted a systematic review following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. Studies published between 2021 and 2024 were retrieved from PubMed, IEEE Xplore, and Web of Science, using search terms related to Alzheimer's disease classification, neuroimaging, machine learning, and diagnostic performance. A total of 39 studies met the inclusion criteria, focusing on the use of magnetic resonance imaging, positron emission tomography, and biomarkers for Alzheimer's disease classification using machine learning models. Multimodal approaches, combining magnetic resonance imaging with positron emission tomography and cognitive assessments, outperformed single-modality studies in diagnostic accuracy and reliability. Convolutional neural networks were the most commonly used machine learning models, followed by hybrid models and random forest. The highest accuracy reported for binary classification was 100%, while multi-class classification achieved up to 99.98%. Techniques such as the Synthetic Minority Over-sampling Technique (SMOTE) and data augmentation were frequently employed to address data imbalance, improving model generalizability. Our review highlights the advantages of using multimodal data in computer-aided decision support systems for more accurate Alzheimer's disease diagnosis.
However, we also identified several limitations, including data imbalance, small sample sizes, and the lack of external validation in most studies. Future research should utilize larger, more diverse datasets, incorporate longitudinal data, and validate models in real-world clinical trials. Additionally, there is a growing need for explainability in machine learning models to ensure they are interpretable and trusted in clinical settings. While computer-aided decision support systems show great promise in improving the early diagnosis of Alzheimer's disease, further work is needed to enhance their robustness, generalizability, and clinical applicability. By addressing these challenges, computer-aided decision support systems could play a pivotal role in the early detection and management of Alzheimer's disease, potentially improving patient outcomes and reducing healthcare costs.
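SMOTE, mentioned above as a common remedy for class imbalance, synthesizes new minority-class samples by interpolating between existing ones. A minimal numpy sketch of that interpolation step (the toy 2-D feature points and function name are illustrative, not from any reviewed study):

```python
import numpy as np

def simple_smote(X_min, n_new, rng=None):
    """Generate n_new synthetic minority samples by interpolating
    between randomly paired minority-class points (SMOTE-style)."""
    rng = np.random.default_rng(rng)
    n = len(X_min)
    idx_a = rng.integers(0, n, size=n_new)   # base samples
    idx_b = rng.integers(0, n, size=n_new)   # partner samples
    gap = rng.random((n_new, 1))             # interpolation factor in [0, 1)
    return X_min[idx_a] + gap * (X_min[idx_b] - X_min[idx_a])

# Toy imbalanced dataset: 5 minority points in a 2-D feature space
X_minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
X_synth = simple_smote(X_minority, n_new=20, rng=0)
```

Full SMOTE restricts the partner to the k nearest minority neighbours rather than any random minority point; libraries such as imbalanced-learn implement that variant directly.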

PARADIM: A Platform to Support Research at the Interface of Data Science and Medical Imaging.

Lemaréchal Y, Couture G, Pelletier F, Lefol R, Asselin PL, Ouellet S, Bernard J, Ebrahimpour L, Manem VSK, Topalis J, Schachtner B, Jodogne S, Joubert P, Jeblick K, Ingrisch M, Després P

pubmed · papers · Jun 3 2025
This paper describes PARADIM, a digital infrastructure designed to support research at the interface of data science and medical imaging, with a focus on Research Data Management best practices. The platform is built from open-source components and rooted in the FAIR principles through strict compliance with the DICOM standard. It addresses key needs in data curation, governance, privacy, and scalable resource management. Supporting every stage of the data science discovery cycle, the platform offers robust functionalities for user identity and access management, data de-identification, storage, annotation, as well as model training and evaluation. Rich metadata are generated all along the research lifecycle to ensure the traceability and reproducibility of results. PARADIM hosts several medical image collections and allows the automation of large-scale, computationally intensive pipelines (e.g., automatic segmentation, dose calculations, AI model evaluation). The platform fills a gap at the interface of data science and medical imaging, where digital infrastructures are key in the development, evaluation, and deployment of innovative solutions in the real world.
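Data de-identification of the kind PARADIM provides usually amounts to stripping or blanking identifying DICOM attributes while preserving technical acquisition parameters. A toy sketch of that rule-based filtering (the tag list and record layout are illustrative, not PARADIM's actual rules):

```python
# Attributes treated as identifying in this toy example; real DICOM
# de-identification follows the standard's confidentiality profiles.
IDENTIFYING_TAGS = {"PatientName", "PatientID", "PatientBirthDate", "InstitutionName"}

def deidentify(record: dict) -> dict:
    """Return a copy of a DICOM-like metadata dict with identifying tags removed."""
    return {tag: value for tag, value in record.items() if tag not in IDENTIFYING_TAGS}

record = {
    "PatientName": "DOE^JANE",
    "PatientID": "12345",
    "Modality": "CT",
    "SliceThickness": 1.0,
}
clean = deidentify(record)  # keeps Modality and SliceThickness only
```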

Development and validation of machine learning models for distal instrumentation-related problems in patients with degenerative lumbar scoliosis based on preoperative CT and MRI.

Feng Z, Yang H, Li Z, Zhang X, Hai Y

pubmed · papers · Jun 3 2025
This investigation proposes a machine learning framework leveraging preoperative MRI and CT imaging data to predict distal instrumentation-related problems (DIP) in degenerative lumbar scoliosis patients undergoing long-segment fusion procedures. We retrospectively analyzed 136 patients, categorized by whether they developed DIP. Preoperative MRI and CT scans provided muscle function and bone density data, including the relative gross cross-sectional area and relative functional cross-sectional area of the multifidus, erector spinae, paraspinal extensor, and psoas major muscles; the gross muscle fat index and functional muscle fat index; and Hounsfield unit values of the lumbosacral region and the lower instrumented vertebra. Predictive factors for DIP were selected through stepwise LASSO regression. Both the LASSO-selected factors and the full factor set were incorporated into six machine learning algorithms, namely k-nearest neighbors, decision tree, support vector machine, random forest, multilayer perceptron (MLP), and Naïve Bayes, with tenfold cross-validation. Among patients, 16.9% developed DIP, with the multifidus' functional cross-sectional area and the lumbosacral region's Hounsfield unit value as significant predictors. The MLP model exhibited superior performance when all predictive factors were input, with an average AUC of 0.98 and a recall of 0.90. We compared various machine learning algorithms and constructed, trained, and validated predictive models based on muscle function and bone density-related variables obtained from preoperative CT and MRI, which could identify patients at high risk of DIP after long-segment spinal fusion surgery.
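The tenfold cross-validation used here can be sketched in a few lines of numpy; a simple nearest-centroid classifier stands in for the study's actual models, and the two-class synthetic data are purely illustrative:

```python
import numpy as np

def tenfold_cv_accuracy(X, y, rng=None):
    """Estimate accuracy of a nearest-centroid classifier with 10-fold CV."""
    rng = np.random.default_rng(rng)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 10)
    accs = []
    for k in range(10):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(10) if j != k])
        # Fit: one centroid per class, computed from the 9 training folds
        centroids = {c: X[train][y[train] == c].mean(axis=0) for c in np.unique(y[train])}
        # Predict: assign each held-out point to the nearest centroid
        preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c])) for x in X[test]]
        accs.append(np.mean(np.array(preds) == y[test]))
    return float(np.mean(accs))

# Two well-separated synthetic classes of 50 points each
data_rng = np.random.default_rng(0)
X = np.vstack([data_rng.normal(0, 0.5, (50, 2)), data_rng.normal(3, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
acc = tenfold_cv_accuracy(X, y, rng=1)
```

Each point is scored exactly once as held-out data, which is what makes the averaged accuracy a less optimistic estimate than training-set accuracy.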

Open-PMC-18M: A High-Fidelity Large Scale Medical Dataset for Multimodal Representation Learning

Negin Baghbanzadeh, Sajad Ashkezari, Elham Dolatabadi, Arash Afkanpour

arxiv · preprint · Jun 3 2025
Compound figures, which are multi-panel composites containing diverse subfigures, are ubiquitous in biomedical literature, yet large-scale subfigure extraction remains largely unaddressed. Prior work on subfigure extraction has been limited in both dataset size and generalizability, leaving a critical open question: How does high-fidelity image-text alignment via large-scale subfigure extraction impact representation learning in vision-language models? We address this gap by introducing a scalable subfigure extraction pipeline based on transformer-based object detection, trained on a synthetic corpus of 500,000 compound figures, and achieving state-of-the-art performance on both ImageCLEF 2016 and synthetic benchmarks. Using this pipeline, we release OPEN-PMC-18M, a large-scale high quality biomedical vision-language dataset comprising 18 million clinically relevant subfigure-caption pairs spanning radiology, microscopy, and visible light photography. We train and evaluate vision-language models on our curated datasets and show improved performance across retrieval, zero-shot classification, and robustness benchmarks, outperforming existing baselines. We release our dataset, models, and code to support reproducible benchmarks and further study into biomedical vision-language modeling and representation learning.
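Object-detection pipelines like the subfigure extractor above are scored by intersection-over-union (IoU) between predicted and ground-truth panel boxes; a minimal IoU computation (the box coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

# A predicted subfigure box against its ground-truth panel:
# intersection area 1, union area 7
pred, truth = (0, 0, 2, 2), (1, 1, 3, 3)
score = iou(pred, truth)
```

Benchmarks such as ImageCLEF typically count a detection as correct when IoU exceeds a fixed threshold (0.5 is a common choice).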

A Review of Intracranial Aneurysm Imaging Modalities, from CT to State-of-the-Art MR.

Allaw S, Khabaz K, Given TC, Montas D, Alcazar-Felix RJ, Srinath A, Kass-Hout T, Carroll TJ, Hurley MC, Polster SP

pubmed · papers · Jun 3 2025
Traditional guidance for intracranial aneurysm (IA) management is dichotomized by rupture status. Fundamental to the management of a ruptured aneurysm is the detection and treatment of SAH, along with securing the aneurysm by the safest technique. Unruptured aneurysms, on the other hand, first require a careful assessment of their natural history versus treatment risk, including an imaging assessment of aneurysm size, location, and morphology, along with additional evidence-based risk factors such as smoking, hypertension, and family history. Unfortunately, a large proportion of ruptured aneurysms are in the lower-risk size category (<7 mm), putting a premium on discovering a more refined noninvasive biomarker to detect and stratify aneurysm instability before rupture. In this review of aneurysm work-up, we cover the gamut of established imaging modalities (eg, CT, CTA, DSA, FLAIR, 3D TOF-MRA, contrast-enhanced MRA) as well as more novel MR techniques (MR vessel wall imaging, dynamic contrast-enhanced MRI, computational fluid dynamics). We also evaluate the current landscape of artificial intelligence software and its integration into diagnostic and risk-stratification pipelines for IAs. These advanced MR techniques, increasingly complemented with artificial intelligence models, offer a paradigm shift by evaluating factors beyond size and morphology, including vessel wall inflammation, permeability, and hemodynamics. Additionally, we provide our institution's scan parameters for many of these modalities as a reference. Ultimately, this review provides an organized, up-to-date summary of the array of available modalities/sequences for IA imaging to help build protocols focused on IA characterization.

A Comparative Performance Analysis of Regular Expressions and an LLM-Based Approach to Extract the BI-RADS Score from Radiological Reports

Dennstaedt, F., Lerch, L., Schmerder, M., Cihoric, N., Cerghetti, G. M., Gaio, R., Bonel, H., Filchenko, I., Hastings, J., Dammann, F., Aebersold, D. M., von Tengg, H., Nairz, K.

medrxiv · preprint · Jun 2 2025
Background: Different Natural Language Processing (NLP) techniques have demonstrated promising results for data extraction from radiological reports. Both traditional rule-based methods like regular expressions (Regex) and modern Large Language Models (LLMs) can extract structured information. However, a comparison between these approaches for the extraction of specific radiological data elements has not been widely conducted. Methods: We compared accuracy and processing time between Regex and LLM-based approaches for extracting BI-RADS scores from 7,764 radiology reports (mammography, ultrasound, MRI, and biopsy). We developed a rule-based algorithm using Regex patterns and implemented an LLM-based extraction using the Rombos-LLM-V2.6-Qwen-14b model. A ground truth dataset of 199 manually classified reports was used for evaluation. Results: There was no statistically significant difference in accuracy between Regex and the LLM-based method in extracting BI-RADS scores (89.20% for Regex versus 87.69% for the LLM-based method; p=0.56). Compared with the LLM-based method, Regex processing was far more efficient, completing the task 28,120 times faster (0.06 seconds vs. 1687.20 seconds). Further analysis revealed that the LLM favored common classifications (particularly a BI-RADS value of 2), while Regex more frequently returned "unclear" values. We could also confirm in our sample a previously reported laterality bias for breast cancer (BI-RADS 6) and detected a slight laterality skew for suspected breast cancer (BI-RADS 5) as well. Conclusion: For structured, standardized data like BI-RADS, traditional NLP techniques seem to be superior, though future work should explore hybrid approaches combining Regex precision for standardized elements with LLM contextual understanding for more complex information extraction tasks.
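A rule-based extractor of the kind benchmarked above can be as small as a single pattern; the pattern, the "unclear" fallback, and the sample reports below are illustrative, not the study's actual implementation:

```python
import re

# Match "BI-RADS" (or "BIRADS") followed by a category 0-6, tolerating
# separators such as ":", "=", "-", or whitespace, plus optional
# subcategories (4a-4c).
BIRADS_RE = re.compile(r"BI-?RADS[\s:=-]*([0-6][abc]?)", re.IGNORECASE)

def extract_birads(report: str) -> str:
    """Return the first BI-RADS category found, or 'unclear' if none matches."""
    match = BIRADS_RE.search(report)
    return match.group(1).lower() if match else "unclear"

reports = [
    "Mammography both breasts. Assessment: BI-RADS 2, benign findings.",
    "Suspicious mass left breast, BIRADS: 4a. Biopsy recommended.",
    "No classification given in this addendum.",
]
scores = [extract_birads(r) for r in reports]  # ['2', '4a', 'unclear']
```

The explicit "unclear" fallback mirrors the behaviour noted in the Results: a regex abstains when no pattern matches, whereas an LLM will often guess a plausible common category.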

Current AI technologies in cancer diagnostics and treatment.

Tiwari A, Mishra S, Kuo TR

pubmed · papers · Jun 2 2025
Cancer continues to be a significant international health issue, which demands the invention of new methods for early detection, precise diagnoses, and personalized treatments. Artificial intelligence (AI) has rapidly become a groundbreaking component in the modern era of oncology, offering sophisticated tools across the range of cancer care. In this review, we performed a systematic survey of the current status of AI technologies used for cancer diagnoses and therapeutic approaches. We discuss AI-facilitated imaging diagnostics using a range of modalities such as computed tomography, magnetic resonance imaging, positron emission tomography, ultrasound, and digital pathology, highlighting the growing role of deep learning in detecting early-stage cancers. We also explore applications of AI in genomics and biomarker discovery, liquid biopsies, and non-invasive diagnoses. In therapeutic interventions, AI-based clinical decision support systems, individualized treatment planning, and AI-facilitated drug discovery are transforming precision cancer therapies. The review also evaluates the effects of AI on radiation therapy, robotic surgery, and patient management, including survival predictions, remote monitoring, and AI-facilitated clinical trials. Finally, we discuss important challenges such as data privacy, interpretability, and regulatory issues, and recommend future directions that involve the use of federated learning, synthetic biology, and quantum-boosted AI. This review highlights the groundbreaking potential of AI to revolutionize cancer care by making diagnostics, treatments, and patient management more precise, efficient, and personalized.

SASWISE-UE: Segmentation and synthesis with interpretable scalable ensembles for uncertainty estimation.

Chen W, McMillan AB

pubmed · papers · Jun 2 2025
This paper introduces an efficient sub-model ensemble framework aimed at enhancing the interpretability of medical deep learning models, thus increasing their clinical applicability. By generating uncertainty maps, this framework enables end-users to evaluate the reliability of model outputs. We developed a strategy to generate diverse models from a single well-trained checkpoint, facilitating the training of a model family. This involves producing multiple outputs from a single input, fusing them into a final output, and estimating uncertainty based on output disagreements. Implemented using U-Net and UNETR models for segmentation and synthesis tasks, this approach was tested on CT body segmentation and MR-CT synthesis datasets. It achieved a mean Dice coefficient of 0.814 in segmentation and a Mean Absolute Error of 88.17 HU in synthesis, improved from 89.43 HU by pruning. Additionally, the framework was evaluated under image corruption and data undersampling, maintaining correlation between uncertainty and error, which highlights its robustness. These results suggest that the proposed approach not only maintains the performance of well-trained models but also enhances interpretability through effective uncertainty estimation, applicable to both convolutional and transformer models in a range of imaging tasks.
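The core mechanism described above (multiple outputs from one input, fused into a final prediction, with uncertainty taken from their disagreement) can be sketched with numpy; the random "sub-model outputs" below stand in for real segmentation or synthesis predictions:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated probability maps from 5 sub-models for one 4x4 image patch;
# in the paper these come from diversified copies of a single trained checkpoint.
outputs = np.clip(rng.normal(0.7, 0.1, size=(5, 4, 4)), 0.0, 1.0)

fused = outputs.mean(axis=0)        # final prediction: fuse by averaging
uncertainty = outputs.std(axis=0)   # disagreement map: high std = low confidence

# Pixels where the sub-models disagree most would be flagged for expert review
most_uncertain = np.unravel_index(uncertainty.argmax(), uncertainty.shape)
```

The paper's finding that uncertainty stays correlated with error under corruption and undersampling is exactly what makes such a disagreement map useful to an end-user: it localizes where the fused output should not be trusted.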

Multicycle Dosimetric Behavior and Dose-Effect Relationships in [¹⁷⁷Lu]Lu-DOTATATE Peptide Receptor Radionuclide Therapy.

Kayal G, Roseland ME, Wang C, Fitzpatrick K, Mirando D, Suresh K, Wong KK, Dewaraja YK

pubmed · papers · Jun 2 2025
We investigated pharmacokinetics, dosimetric patterns, and absorbed dose (AD)-effect correlations in [¹⁷⁷Lu]Lu-DOTATATE peptide receptor radionuclide therapy (PRRT) for metastatic neuroendocrine tumors (NETs) to develop strategies for future personalized dosimetry-guided treatments. Methods: Patients treated with standard [¹⁷⁷Lu]Lu-DOTATATE PRRT were recruited for serial SPECT/CT imaging. Kidneys were segmented on CT using a deep learning algorithm, and tumors were segmented at each cycle using a SPECT gradient-based tool, guided by radiologist-defined contours on baseline CT/MRI. Dosimetry was performed using an automated workflow that included contour intensity-based SPECT-SPECT registration, generation of Monte Carlo dose-rate maps, and dose-rate fitting. Lesion-level response at first follow-up was evaluated using both radiologic (RECIST and modified RECIST) and [⁶⁸Ga]Ga-DOTATATE PET-based criteria. Kidney toxicity was evaluated based on the estimated glomerular filtration rate (eGFR) at 9 mo after PRRT. Results: Dosimetry was performed after cycle 1 in 30 patients and after all cycles in 22 of 30 patients who completed SPECT/CT imaging after each cycle. Median cumulative tumor (n = 78) AD was 2.2 Gy/GBq (range, 0.1-20.8 Gy/GBq), whereas median kidney AD was 0.44 Gy/GBq (range, 0.25-0.96 Gy/GBq). The tumor-to-kidney AD ratio decreased with each cycle (median, 6.4, 5.7, 4.7, and 3.9 for cycles 1-4) because of a decrease in tumor AD, while kidney AD remained relatively constant. Higher-grade (grade 2) and pancreatic NETs showed a significantly larger drop in AD with each cycle, as well as significantly lower AD and effective half-life (Teff), than did low-grade (grade 1) and small intestinal NETs, respectively. Teff remained relatively constant with each cycle for both tumors and kidneys. Kidney Teff and AD were significantly higher in patients with low eGFR than in those with high eGFR. Tumor AD was not significantly associated with response measures. There was no nephrotoxicity higher than grade 2; however, a significant negative association was found in univariate analyses between eGFR at 9 mo and AD to the kidney, which improved in a multivariable model that also adjusted for baseline eGFR (cycle 1 AD, P = 0.020, adjusted R² = 0.57; cumulative AD, P = 0.049, adjusted R² = 0.65). The association between percentage change in eGFR and AD to the kidney was also significant in univariate analysis and after adjusting for baseline eGFR (cycle 1 AD, P = 0.006, adjusted R² = 0.21; cumulative AD, P = 0.019, adjusted R² = 0.21). Conclusion: The dosimetric behavior we report over different cycles and for different NET subgroups can be considered when optimizing PRRT to individual patients. The models we present for the relationship between eGFR and AD have potential for clinical use in predicting renal function early in the treatment course. Furthermore, reported pharmacokinetics for patient subgroups allow more appropriate selection of population parameters to be used in protocols with fewer imaging time points that facilitate more widespread adoption of dosimetry.
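Dose-rate fitting of the kind described in the workflow commonly assumes mono-exponential washout, in which case the absorbed dose is the analytic integral of the fitted curve and Teff falls out of the fitted decay constant. A numpy sketch on synthetic time points (the numbers are illustrative, not patient data):

```python
import numpy as np

def fit_monoexp(t_hours, dose_rate):
    """Log-linear least-squares fit of D(t) = D0 * exp(-lam * t).
    Returns (D0, effective half-life in hours, absorbed dose = D0 / lam,
    i.e. the integral of the fitted curve from 0 to infinity)."""
    slope, intercept = np.polyfit(t_hours, np.log(dose_rate), 1)
    lam = -slope
    d0 = np.exp(intercept)
    t_eff = np.log(2) / lam        # effective half-life
    absorbed_dose = d0 / lam       # area under D0 * exp(-lam * t)
    return d0, t_eff, absorbed_dose

# Synthetic dose-rate samples decaying with a 90 h effective half-life
t = np.array([4.0, 24.0, 96.0, 168.0])          # imaging time points (h)
true_lam = np.log(2) / 90.0
rates = 0.05 * np.exp(-true_lam * t)            # dose rate (Gy/h)
d0, t_eff, ad = fit_monoexp(t, rates)           # recovers D0 = 0.05, Teff = 90
```

With noisy clinical data the fit would be weighted and restricted to the washout phase, but the relationship AD = D0 · Teff / ln 2 is the reason Teff drives the cumulative dose so directly in the abstract's subgroup comparisons.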
