Development of a No-Reference CT Image Quality Assessment Method Using RadImageNet Pre-trained Deep Learning Models.

Ohashi K, Nagatani Y, Yamazaki A, Yoshigoe M, Iwai K, Uemura R, Shimomura M, Tanimura K, Ishida T

PubMed | May 27, 2025
Accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic accuracy, optimizing imaging protocols, and preventing excessive radiation exposure. In clinical settings, where high-quality reference images are often unavailable, developing no-reference image quality assessment (NR-IQA) methods is essential. Recently, CT-NR-IQA methods using deep learning have been widely studied; however, significant challenges remain in handling multiple degradation factors and accurately reflecting real-world degradations. To address these issues, we propose a novel CT-NR-IQA method. Our approach utilizes a dataset that combines two degradation factors (noise and blur) to train convolutional neural network (CNN) models capable of handling multiple degradation factors. Additionally, we leverage RadImageNet pre-trained models (ResNet50, DenseNet121, InceptionV3, and InceptionResNetV2), allowing the models to learn deep features from large-scale real clinical images and thus enhancing adaptability to real-world degradations without relying on artificially degraded images. Model performance was evaluated by measuring the correlation between subjective scores and predicted image quality scores for both artificially degraded and real clinical image datasets. The results demonstrated positive correlations between the subjective and predicted scores for both datasets. In particular, ResNet50 showed the best performance, with a correlation coefficient of 0.910 for the artificially degraded images and 0.831 for the real clinical images. These findings indicate that the proposed method could serve as a potential surrogate for subjective assessment in CT-NR-IQA.
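
As a rough illustration of the regression setup described above, the sketch below fine-tunes a ResNet50 backbone with a single-output head to predict a scalar quality score; the checkpoint path, data handling, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch: fine-tune a pretrained ResNet50 backbone with a regression
# head to predict a scalar CT image-quality score. The RadImageNet weights are
# assumed to be available as a local file ("radimagenet_resnet50.pth" is a
# hypothetical path); the training step is illustrative only.
import torch
import torch.nn as nn
from torchvision.models import resnet50

backbone = resnet50(weights=None)
state = torch.load("radimagenet_resnet50.pth", map_location="cpu")  # hypothetical checkpoint
backbone.load_state_dict(state, strict=False)          # keep only matching layers
backbone.fc = nn.Linear(backbone.fc.in_features, 1)    # scalar quality score

optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)
criterion = nn.MSELoss()

def train_step(images: torch.Tensor, subjective_scores: torch.Tensor) -> float:
    """One optimization step on a batch of degraded CT slices."""
    backbone.train()
    optimizer.zero_grad()
    pred = backbone(images).squeeze(1)
    loss = criterion(pred, subjective_scores)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Evaluation would then compare the predicted scores against the subjective scores with a correlation coefficient, as reported in the abstract.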

Deep learning-based CAD system for Alzheimer's diagnosis using deep downsized KPLS.

Neffati S, Mekki K, Machhout M

PubMed | May 27, 2025
Alzheimer's disease (AD) is the most prevalent type of dementia. It is associated with a gradual decline in various brain functions, such as memory. Many research efforts are now directed toward non-invasive procedures for early diagnosis, because early detection greatly benefits patient care and treatment outcomes. In addition to providing an accurate diagnosis and reducing the rate of misdiagnosis, computer-aided diagnosis (CAD) systems are built to give a definitive diagnosis. This paper presents a novel CAD system to determine the stages of AD. Initially, deep learning techniques are utilized to extract features from brain MRIs of AD patients. Then, the extracted features are reduced using a proposed feature reduction technique named Deep Downsized Kernel Partial Least Squares (DDKPLS). The proposed approach selects a reduced number of samples from the initial information matrix. The chosen samples give rise to a new data matrix that is further processed by KPLS to deal with the high dimensionality. The reduced feature space is finally classified using an extreme learning machine (ELM). The implementation is named DDKPLS-ELM. Benchmark tests performed on the Kaggle MRI dataset demonstrate the efficacy of the DDKPLS-based classifier, which achieves an accuracy of up to 95.4% and an F1 score of 95.1%.
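
The sketch below illustrates the general reduce-then-classify idea in Python: an RBF kernel computed against a downsized subset of samples stands in for DDKPLS, and a simple extreme learning machine performs the final classification. It is an approximation under stated assumptions, not the paper's DDKPLS-ELM implementation, and the subset size, gamma, and hidden width are illustrative choices.

```python
# Approximate reduce-then-classify pipeline: kernel features against a reduced
# sample set, PLS for dimensionality reduction, and an ELM readout.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics.pairwise import rbf_kernel

def downsized_kernel_pls(X, Y_onehot, n_ref=200, n_components=20, gamma=1e-3, seed=0):
    """Fit PLS on kernel features computed against a downsized reference subset."""
    rng = np.random.default_rng(seed)
    ref = X[rng.choice(len(X), size=min(n_ref, len(X)), replace=False)]
    K = rbf_kernel(X, ref, gamma=gamma)                  # kernel features vs. reduced sample set
    pls = PLSRegression(n_components=n_components).fit(K, Y_onehot)
    return pls, ref

class ELM:
    """Extreme learning machine: fixed random hidden layer, closed-form output weights."""
    def __init__(self, n_hidden=500, reg=1e-2, seed=0):
        self.n_hidden, self.reg, self.seed = n_hidden, reg, seed
    def fit(self, Z, Y_onehot):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(Z.shape[1], self.n_hidden))
        H = np.tanh(Z @ self.W)
        self.beta = np.linalg.solve(H.T @ H + self.reg * np.eye(self.n_hidden), H.T @ Y_onehot)
        return self
    def predict(self, Z):
        return np.argmax(np.tanh(Z @ self.W) @ self.beta, axis=1)
```

In use, the PLS scores `pls.transform(rbf_kernel(X_new, ref, gamma=1e-3))` would form the reduced feature space passed to `ELM.fit` and `ELM.predict`.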

Machine learning-driven imaging data for early prediction of lung toxicity in breast cancer radiotherapy.

Ungvári T, Szabó D, Győrfi A, Dankovics Z, Kiss B, Olajos J, Tőkési K

PubMed | May 27, 2025
One possible adverse effect of breast irradiation is the development of pulmonary fibrosis. The aim of this study was to determine whether planning CT scans can predict which patients are more likely to develop lung lesions after treatment. A retrospective analysis of 242 patient records was performed using different machine learning models. These models showed a remarkable correlation between the occurrence of fibrosis and the Hounsfield units of the lungs in the CT data. Three different classification methods (tree-based, kernel-based, and k-nearest neighbors) showed predictive values above 60%. The human predictive factor (HPF), a mathematical predictive model, further strengthened the association between lung Hounsfield unit (HU) metrics and radiation-induced lung injury (RILI). These approaches help optimize radiation treatment plans to preserve lung health. Machine learning models and the HPF can also provide effective diagnostic and therapeutic support for other diseases.
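
As an illustration of how such a classifier could be set up, the sketch below cross-validates a k-nearest-neighbors model on tabular lung HU features; the file names and feature choices are hypothetical and not the study's exact pipeline.

```python
# Minimal sketch, assuming a tabular dataset where each row holds lung HU
# statistics extracted from the planning CT and a binary fibrosis label.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# X: (n_patients, n_features) lung HU metrics; y: 1 = fibrosis developed, 0 = not
X = np.load("lung_hu_features.npy")   # hypothetical file
y = np.load("fibrosis_labels.npy")    # hypothetical file

knn = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(knn, X, y, cv=5, scoring="accuracy")
print(f"kNN cross-validated accuracy: {scores.mean():.2f}")
```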

China Protocol for early screening, precise diagnosis, and individualized treatment of lung cancer.

Wang C, Chen B, Liang S, Shao J, Li J, Yang L, Ren P, Wang Z, Luo W, Zhang L, Liu D, Li W

PubMed | May 27, 2025
Early screening, diagnosis, and treatment of lung cancer are pivotal in clinical practice, since tumor stage remains the dominant factor affecting patient survival. Previous initiatives have tried to develop new tools for lung cancer decision-making. In this study, we proposed the China Protocol, a complete lung cancer workflow tailored to the Chinese population, implemented in steps that include early screening by evaluation of risk factors and a three-dimensional thin-layer image reconstruction technique for low-dose computed tomography (Tre-LDCT), accurate diagnosis via artificial intelligence (AI) and novel biomarkers, and individualized treatment through non-invasive molecular visualization strategies. The application of this protocol has improved the early diagnosis and 5-year survival rates of lung cancer in China. The proportion of early-stage (stage I) lung cancer has increased from 46.3% to 65.6%, with a 5-year survival rate of 90.4%. Moreover, for stage IA1 lung cancer in particular, the diagnosis rate has improved from 16% to 27.9%, and the 5-year survival rate of this group reached 97.5%. Thus, we defined stage IA1 lung cancer, a cohort that benefits significantly from early diagnosis and treatment, as "ultra-early stage lung cancer", aiming to provide an intuitive description for more precise management and survival improvement. In the future, we will promote our findings to multicenter remote areas through medical alliances and mobile health services, with the goal of advancing the diagnosis and treatment of lung cancer.

Dose calculation in nuclear medicine with magnetic resonance imaging images using Monte Carlo method.

Vu LH, Thao NTP, Trung NT, Hau PVT, Hong Loan TT

PubMed | May 27, 2025
In recent years, scientists have been trying to convert magnetic resonance imaging (MRI) images into computed tomography (CT) images for dose calculation, while taking advantage of the benefits of MRI. The main approaches for image conversion are bulk density assignment, atlas registration, and machine learning. These methods have limitations in accuracy and processing time and require large datasets for image conversion. In this study, the novel 'voxels spawn voxels' technique, combined with the 'orthonormalize' feature in the Carimas software, was developed to build a conversion dataset from MRI intensity to Hounsfield unit (HU) values for several structural regions, including the gluteus maximus, liver, kidneys, spleen, pancreas, and colon. The original CT images and the converted MRI images were imported into the Geant4/Gamos software for dose calculation. The method gave good agreement (<5%) in most organs, except for the intestine (18%).
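
A minimal sketch of the per-region intensity-to-HU mapping idea is shown below, assuming a piecewise-linear conversion table per organ; the breakpoints are placeholders rather than the calibration reported in the study.

```python
# Minimal sketch of a per-region MRI-intensity-to-HU conversion table.
# The breakpoints below are placeholders, not values from the paper.
import numpy as np

# Hypothetical calibration: MRI intensity breakpoints -> HU breakpoints per region
CONVERSION = {
    "liver":  (np.array([0, 200, 800, 1500]), np.array([-100, 40, 60, 80])),
    "kidney": (np.array([0, 150, 700, 1400]), np.array([-100, 20, 35, 50])),
}

def mri_to_hu(mri_voxels: np.ndarray, region: str) -> np.ndarray:
    """Map MRI intensities of one segmented region to HU by piecewise-linear interpolation."""
    x, hu = CONVERSION[region]
    return np.interp(mri_voxels, x, hu)
```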

STA-Risk: A Deep Dive of Spatio-Temporal Asymmetries for Breast Cancer Risk Prediction

Zhengbo Zhou, Dooman Arefan, Margarita Zuley, Jules Sumkin, Shandong Wu

arXiv preprint | May 27, 2025
Predicting the risk of developing breast cancer is an important clinical tool to guide early intervention and tailor personalized screening strategies. Early risk models have limited performance, and recently, machine learning-based analysis of mammogram images has shown encouraging risk prediction results. These models, however, are limited to the use of a single exam or tend to overlook the nuanced evolution of breast tissue in the spatial and temporal details of longitudinal imaging exams that are indicative of breast cancer risk. In this paper, we propose STA-Risk (Spatial and Temporal Asymmetry-based Risk Prediction), a novel Transformer-based model that captures fine-grained mammographic imaging evolution simultaneously from bilateral and longitudinal asymmetries for breast cancer risk prediction. STA-Risk introduces side encoding and temporal encoding to learn spatial-temporal asymmetries, regulated by a customized asymmetry loss. We performed extensive experiments on two independent mammogram datasets and achieved superior performance over four representative SOTA models for 1- to 5-year future risk prediction. Source code will be released upon publication of the paper.
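
The abstract does not specify the form of the asymmetry loss; the sketch below shows one plausible margin-based variant in PyTorch, purely as an assumption-labeled illustration of how bilateral embedding asymmetry could be regulated.

```python
# One plausible (assumed) bilateral-asymmetry loss: asymmetry between left and
# right breast embeddings is pushed above a margin for exams with a positive
# future-cancer label and kept small otherwise. Not the STA-Risk loss itself.
import torch
import torch.nn.functional as F

def asymmetry_loss(z_left: torch.Tensor, z_right: torch.Tensor,
                   labels: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """z_left, z_right: (batch, dim) embeddings; labels: (batch,) 0/1 future-risk labels."""
    asym = torch.norm(z_left - z_right, dim=1)   # per-exam asymmetry magnitude
    pos = F.relu(margin - asym)                  # positives: encourage asymmetry above margin
    neg = asym                                   # negatives: keep asymmetry small
    return torch.where(labels.bool(), pos, neg).mean()
```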

MedBridge: Bridging Foundation Vision-Language Models to Medical Image Diagnosis

Yitong Li, Morteza Ghahremani, Christian Wachinger

arXiv preprint | May 27, 2025
Recent vision-language foundation models deliver state-of-the-art results on natural image classification but falter on medical images due to pronounced domain shifts. At the same time, training a medical foundation model requires substantial resources, including extensive annotated data and high computational capacity. To bridge this gap with minimal overhead, we introduce MedBridge, a lightweight multimodal adaptation framework that re-purposes pretrained VLMs for accurate medical image diagnosis. MedBridge comprises three key components. First, a Focal Sampling module extracts high-resolution local regions to capture subtle pathological features and compensate for the limited input resolution of general-purpose VLMs. Second, a Query Encoder (QEncoder) injects a small set of learnable queries that attend to the frozen feature maps of the VLM, aligning them with medical semantics without retraining the entire backbone. Third, a Mixture of Experts mechanism, driven by the learnable queries, harnesses the complementary strengths of diverse VLMs to maximize diagnostic performance. We evaluate MedBridge on five medical imaging benchmarks across three key adaptation tasks, demonstrating its superior performance in both cross-domain and in-domain adaptation settings, even under varying levels of training data availability. Notably, MedBridge achieved 6-15% improvements in AUC over state-of-the-art VLM adaptation methods for multi-label thoracic disease diagnosis, underscoring its effectiveness in leveraging foundation models for accurate and data-efficient medical diagnosis. Our code is available at https://github.com/ai-med/MedBridge.
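
For readers unfamiliar with the learnable-query pattern, the sketch below shows a minimal QEncoder-style module in which a small set of learnable queries cross-attends to frozen VLM feature tokens; the dimensions and single-layer design are assumptions, and the released code at the URL above should be treated as authoritative.

```python
# Minimal sketch of the learnable-query idea: queries cross-attend to frozen
# VLM feature maps and the pooled output feeds a multi-label classification head.
import torch
import torch.nn as nn

class QEncoderSketch(nn.Module):
    def __init__(self, feat_dim: int = 768, n_queries: int = 16,
                 n_heads: int = 8, n_classes: int = 14):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(n_queries, feat_dim) * 0.02)
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, frozen_feats: torch.Tensor) -> torch.Tensor:
        """frozen_feats: (batch, n_tokens, feat_dim) from a frozen VLM backbone."""
        q = self.queries.unsqueeze(0).expand(frozen_feats.size(0), -1, -1)
        out, _ = self.attn(q, frozen_feats, frozen_feats)   # queries attend to frozen tokens
        return self.head(out.mean(dim=1))                   # pooled queries -> disease logits
```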

An orchestration learning framework for ultrasound imaging: Prompt-Guided Hyper-Perception and Attention-Matching Downstream Synchronization.

Lin Z, Li S, Wang S, Gao Z, Sun Y, Lam CT, Hu X, Yang X, Ni D, Tan T

PubMed | May 27, 2025
Ultrasound imaging is pivotal in clinical diagnostics due to its affordability, portability, safety, real-time capability, and non-invasive nature. It is widely used to examine various organs, such as the breast, thyroid, ovary, and heart. However, the manual interpretation and annotation of ultrasound images are time-consuming and prone to variability among physicians. While single-task artificial intelligence (AI) solutions have been explored, they are not ideal for scaling AI applications in medical imaging. Foundation models, although a trending solution, often struggle with real-world medical datasets due to factors such as noise, variability, and the inability to flexibly align prior knowledge with task adaptation. To address these limitations, we propose an orchestration learning framework named PerceptGuide for general-purpose ultrasound classification and segmentation. Our framework incorporates a novel orchestration mechanism based on prompted hyper-perception, which adapts to the diverse inductive biases required by different ultrasound datasets. Unlike self-supervised pre-trained models, which require extensive fine-tuning, our approach leverages supervised pre-training to directly capture task-relevant features, providing a stronger foundation for multi-task and multi-organ ultrasound imaging. To support this research, we compiled a large-scale multi-task, multi-organ public ultrasound dataset (M²-US), featuring images from 9 organs and 16 datasets and encompassing both classification and segmentation tasks. Our approach employs four specific prompts (Object, Task, Input, and Position) to guide the model, ensuring task-specific adaptability. Additionally, a downstream synchronization training stage is introduced to fine-tune the model on new data, significantly improving generalization capabilities and enabling real-world applications. Experimental results demonstrate the robustness and versatility of our framework in handling multi-task and multi-organ ultrasound image processing, outperforming both specialist models and existing general AI solutions. Compared to specialist models, our method improves segmentation performance from 82.26% to 86.45% and classification performance from 71.30% to 79.08%, while also significantly reducing the number of model parameters.
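
As a loose illustration of prompt-guided conditioning, the sketch below embeds the four prompt indices and uses them to modulate shared features in a FiLM-like manner; this is an assumed stand-in for the paper's prompted hyper-perception mechanism, not its actual design.

```python
# Assumed FiLM-style prompt conditioning: embeddings of the four prompt indices
# (object, task, input, position) scale and shift shared backbone features.
import torch
import torch.nn as nn

class PromptModulation(nn.Module):
    def __init__(self, feat_dim: int, n_objects: int, n_tasks: int,
                 n_inputs: int, n_positions: int):
        super().__init__()
        self.embeds = nn.ModuleList([
            nn.Embedding(n, feat_dim) for n in (n_objects, n_tasks, n_inputs, n_positions)
        ])
        self.to_scale_shift = nn.Linear(feat_dim, 2 * feat_dim)

    def forward(self, feats: torch.Tensor, prompts: torch.Tensor) -> torch.Tensor:
        """feats: (batch, feat_dim) shared features; prompts: (batch, 4) integer prompt indices."""
        p = sum(emb(prompts[:, i]) for i, emb in enumerate(self.embeds))
        scale, shift = self.to_scale_shift(p).chunk(2, dim=-1)
        return feats * (1 + scale) + shift
```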

Evaluating Large Language Models for Enhancing Radiology Specialty Examination: A Comparative Study with Human Performance.

Liu HY, Chen SJ, Wang W, Lee CH, Hsu HH, Shen SH, Chiou HJ, Lee WJ

PubMed | May 27, 2025
The radiology specialty examination assesses clinical decision-making, image interpretation, and diagnostic reasoning. With the expansion of medical knowledge, traditional test design faces challenges in maintaining accuracy and relevance. Large language models (LLMs) demonstrate potential in medical education. This study evaluates LLM performance on radiology specialty exams, explores their role in assessing question difficulty, and investigates their reasoning processes, aiming to develop a more objective and efficient framework for exam design. The study compared the performance of LLMs and human examinees on a radiology specialty examination. Three LLMs (GPT-4o, o1-preview, and GPT-3.5-turbo-1106) were evaluated under zero-shot conditions. Exam accuracy, examinee accuracy, the discrimination index, and point-biserial correlation were used to assess the LLMs' ability to predict question difficulty and their reasoning processes. The data provided by the Taiwan Radiological Society ensure comparability between AI and human performance. In terms of accuracy, GPT-4o (88.0%) and o1-preview (90.9%) outperformed human examinees (76.3%), whereas GPT-3.5-turbo-1106 showed significantly lower accuracy (50.2%). Question difficulty analysis revealed that the newer LLMs excel at solving complex questions, while GPT-3.5-turbo-1106 exhibited greater performance variability. Discrimination index and point-biserial correlation analyses demonstrated that GPT-4o and o1-preview accurately identified key differentiating questions, closely mirroring human reasoning patterns. These findings suggest that advanced LLMs can assess medical examination difficulty, offering potential applications in exam standardization and question evaluation. In summary, this study evaluated the problem-solving capabilities of GPT-3.5-turbo-1106, GPT-4o, and o1-preview on a radiology specialty examination; LLMs should be utilized as tools for assessing exam question difficulty and assisting in the standardized development of medical examinations.
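
For reference, the two item-analysis statistics named above can be computed from a binary response matrix as in the sketch below; the 27% upper/lower group split is a common convention assumed here rather than a detail taken from the study.

```python
# Item analysis from a (n_examinees, n_items) matrix of 0/1 correctness:
# discrimination index (upper-group minus lower-group item accuracy) and
# point-biserial correlation against the corrected total score.
import numpy as np
from scipy.stats import pointbiserialr

def item_statistics(responses: np.ndarray):
    """Return per-item discrimination index and point-biserial correlation."""
    totals = responses.sum(axis=1)
    order = np.argsort(totals)
    k = max(1, int(0.27 * len(totals)))          # conventional 27% group size
    lower, upper = order[:k], order[-k:]
    discrimination = responses[upper].mean(axis=0) - responses[lower].mean(axis=0)
    point_biserial = []
    for j in range(responses.shape[1]):
        r, _ = pointbiserialr(responses[:, j], totals - responses[:, j])
        point_biserial.append(r)
    return discrimination, np.array(point_biserial)
```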

Interpretable Machine Learning Models for Differentiating Glioblastoma From Solitary Brain Metastasis Using Radiomics.

Xia X, Wu W, Tan Q, Gou Q

PubMed | May 27, 2025
To develop and validate interpretable machine learning models for differentiating glioblastoma (GB) from solitary brain metastasis (SBM) using radiomics features from contrast-enhanced T1-weighted MRI (CE-T1WI), and to compare the impact of low-order and high-order features on model performance. A cohort of 434 patients with histopathologically confirmed GB (226 patients) and SBM (208 patients) was retrospectively analyzed. Radiomic features were derived from CE-T1WI, with feature selection conducted through minimum redundancy maximum relevance (mRMR) and least absolute shrinkage and selection operator (LASSO) regression. Machine learning models, including GradientBoost and LightGBM (LGBM), were trained using low-order and high-order features. Model performance was assessed through receiver operating characteristic analysis and computation of the area under the curve (AUC), along with other indicators including accuracy, specificity, and sensitivity. SHapley Additive exPlanations (SHAP) analysis was used to measure the influence of each feature on the model's predictions. The various machine learning models performed notably differently on the training and validation datasets. For the training group, the LGBM, CatBoost, multilayer perceptron (MLP), and GradientBoost models achieved the highest AUC scores, all exceeding 0.9, demonstrating strong discriminative power. The LGBM model exhibited the best stability, with a minimal AUC difference of only 0.005 between the training and test sets, suggesting strong generalizability. Among the validation group results, the GradientBoost classifier achieved the maximum AUC of 0.927, closely followed by random forest at 0.925. GradientBoost also demonstrated high sensitivity (0.911) and negative predictive value (NPV, 0.889), effectively identifying true positives. The LGBM model showed the highest test accuracy (86.2%) and performed excellently in terms of sensitivity (0.911), NPV (0.895), and positive predictive value (PPV, 0.837). The models utilizing high-order features outperformed those based on low-order features on all metrics. SHAP analysis further enhances model interpretability, providing insights into feature importance and contributions to classification decisions. Machine learning techniques based on radiomics can effectively distinguish GB from SBM, with gradient-boosted tree models such as LGBM demonstrating superior performance. High-order features significantly improve model accuracy and robustness. SHAP enhances the interpretability and transparency of models for distinguishing brain tumors, providing intuitive visualization of the contribution of radiomic features to classification.
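
A minimal sketch of the selection-then-classify pipeline described above is given below, using L1-penalized logistic regression as the LASSO step, LightGBM as the classifier, and SHAP for feature attribution; hyperparameters and file names are placeholders, and the mRMR step is omitted for brevity.

```python
# Placeholder radiomics pipeline: LASSO-style feature selection, LightGBM
# classification of GB vs. SBM, and SHAP values for interpretability.
import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

df = pd.read_csv("radiomics_features.csv")           # hypothetical feature table
X, y = df.drop(columns=["label"]), df["label"]        # 1 = GB, 0 = SBM
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.1).fit(X_tr, y_tr)
selected = X.columns[np.abs(lasso.coef_[0]) > 1e-6]   # features surviving the L1 penalty

model = LGBMClassifier(n_estimators=300, learning_rate=0.05)
model.fit(X_tr[selected], y_tr)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te[selected])   # per-feature contributions per case
```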