Page 323 of 343 (3,422 results)

Prostate Cancer Screening with Artificial Intelligence-Enhanced Micro-Ultrasound: A Comparative Study with Traditional Methods

Muhammad Imran, Wayne G. Brisbane, Li-Ming Su, Jason P. Joseph, Wei Shao

arXiv preprint, May 27, 2025
Background and objective: Micro-ultrasound (micro-US) is a novel imaging modality with diagnostic accuracy comparable to MRI for detecting clinically significant prostate cancer (csPCa). We investigated whether artificial intelligence (AI) interpretation of micro-US can outperform clinical screening methods using PSA and digital rectal examination (DRE). Methods: We retrospectively studied 145 men who underwent micro-US guided biopsy (79 with csPCa, 66 without). A self-supervised convolutional autoencoder was used to extract deep image features from 2D micro-US slices. Random forest classifiers were trained using five-fold cross-validation to predict csPCa at the slice level. Patients were classified as csPCa-positive if 88 or more consecutive slices were predicted positive. Model performance was compared with a classifier using PSA, DRE, prostate volume, and age. Key findings and limitations: The AI-based micro-US model and clinical screening model achieved AUROCs of 0.871 and 0.753, respectively. At a fixed threshold, the micro-US model achieved 92.5% sensitivity and 68.1% specificity, while the clinical model showed 96.2% sensitivity but only 27.3% specificity. Limitations include a retrospective single-center design and lack of external validation. Conclusions and clinical implications: AI-interpreted micro-US improves specificity while maintaining high sensitivity for csPCa detection. This method may reduce unnecessary biopsies and serve as a low-cost alternative to PSA-based screening. Patient summary: We developed an AI system to analyze prostate micro-ultrasound images. It outperformed PSA and DRE in detecting aggressive cancer and may help avoid unnecessary biopsies.
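The patient-level rule described above (csPCa-positive when a long enough run of consecutive slices is predicted positive) can be sketched as follows. This is a minimal illustration, not the authors' code; the function name is ours and the run-length threshold is a tunable parameter.

```python
def patient_positive(slice_preds, min_run):
    """Return True when at least `min_run` consecutive slice-level
    predictions (truthy values) are positive."""
    run = 0
    for pred in slice_preds:
        run = run + 1 if pred else 0  # extend or reset the current run
        if run >= min_run:
            return True
    return False
```

With the abstract's threshold, this would be called as `patient_positive(preds, min_run=88)` on the ordered slice predictions for one patient.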

Machine learning decision support model construction for craniotomy approach of pineal region tumors based on MRI images.

Chen Z, Chen Y, Su Y, Jiang N, Wanggou S, Li X

PubMed paper, May 27, 2025
Pineal region tumors (PRTs) are rare, deep-seated brain tumors, and complete surgical resection is crucial for effective treatment. The choice of surgical approach is often challenging because of the low incidence and deep location. This study combines machine learning and deep learning algorithms with preoperative MRI images to build a model that recommends surgical approaches for PRTs, aiming to encode clinical experience for practical reference and education. This retrospective study enrolled 173 patients radiologically diagnosed with PRTs at our hospital. Three traditional surgical approaches were recorded as prediction labels. Clinical and VASARI-related radiological features were selected to construct the machine learning prediction models, and MRI images in axial, sagittal, and coronal orientations were used to build and evaluate the deep learning craniotomy-approach prediction models. Five machine learning methods were applied to construct predictive classifiers from the clinical and VASARI features, and all achieved area under the receiver operating characteristic (ROC) curve (AUC) values above 0.7. In addition, three deep learning algorithms (ResNet-50, EfficientNetV2-m, and ViT) were applied to the MRI images from the different orientations. EfficientNetV2-m achieved the highest AUC of 0.89, demonstrating high predictive performance. Class activation mapping revealed that the tumor itself and its relationship to surrounding structures are the crucial regions for model decision-making. In this study, we used machine learning and deep learning to construct surgical approach recommendation models; the deep learning models achieved high predictive performance and can provide efficient, personalized decision support for the choice of surgical approach in PRTs.
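For reference on the AUC values reported in abstracts like the one above, the area under the ROC curve can be computed directly from score ranks via the Mann-Whitney formulation. This is a generic sketch (function name ours, ties in scores not handled), not any paper's evaluation code.

```python
import numpy as np

def auroc(y_true, scores):
    """AUROC via the rank-sum (Mann-Whitney U) formulation.
    Note: ties in `scores` are not handled."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)  # 1-based ranks
    pos = y_true == 1
    n_pos = pos.sum()
    n_neg = (~pos).sum()
    # U statistic of the positives, normalized to [0, 1]
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

A perfectly ranked classifier yields 1.0, a perfectly reversed one 0.0, and chance performance 0.5.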

Automated Body Composition Analysis Using DAFS Express on 2D MRI Slices at L3 Vertebral Level.

Akella V, Bagherinasab R, Lee H, Li JM, Nguyen L, Salehin M, Chow VTY, Popuri K, Beg MF

PubMed paper, May 27, 2025
Body composition analysis is vital in assessing health conditions such as obesity, sarcopenia, and metabolic syndromes. MRI provides detailed images of skeletal muscle (SM), visceral adipose tissue (VAT), and subcutaneous adipose tissue (SAT), but their manual segmentation is labor-intensive and limits clinical applicability. This study validates an automated tool for MRI-based 2D body composition analysis (Data Analysis Facilitation Suite (DAFS) Express), comparing its automated measurements with expert manual segmentations using UK Biobank data. A cohort of 399 participants from the UK Biobank dataset was selected, yielding 423 single L3 slices for analysis. DAFS Express performed automated segmentations of SM, VAT, and SAT, which were then manually corrected by expert raters for validation. Evaluation metrics included Jaccard coefficients, Dice scores, intraclass correlation coefficients (ICCs), and Bland-Altman plots to assess segmentation agreement and reliability. High agreement was observed between automated and manual segmentations, with mean Jaccard scores of SM 99.03%, VAT 95.25%, and SAT 99.57%, and mean Dice scores of SM 99.51%, VAT 97.41%, and SAT 99.78%. Cross-sectional area comparisons showed consistent measurements, with automated methods closely matching manual measurements for SM and SAT and slightly higher values for VAT (SM: auto 132.51 cm<sup>2</sup>, manual 132.36 cm<sup>2</sup>; VAT: auto 137.07 cm<sup>2</sup>, manual 134.46 cm<sup>2</sup>; SAT: auto 203.39 cm<sup>2</sup>, manual 202.85 cm<sup>2</sup>). ICCs confirmed strong reliability (SM 0.998, VAT 0.994, SAT 0.994). Bland-Altman plots revealed minimal biases, and boxplots illustrated similar distributions across SM, VAT, and SAT areas. On average, DAFS Express took 18 s per DICOM (126.9 min in total for the 423 images) to output segmentations and a measurement PDF for each DICOM.
Automated segmentation of SM, VAT, and SAT from 2D MRI images using DAFS Express showed comparable accuracy to manual segmentation. This underscores its potential to streamline image analysis processes in research and clinical settings, enhancing diagnostic accuracy and efficiency. Future work should focus on further validation across diverse clinical applications and imaging conditions.
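The Dice and Jaccard agreement metrics used above have a direct definition on binary masks; a minimal NumPy sketch (our own helper, not DAFS Express code):

```python
import numpy as np

def dice_jaccard(a, b):
    """Dice and Jaccard overlap between two binary masks of equal shape."""
    a = np.asarray(a).astype(bool)
    b = np.asarray(b).astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    dice = 2 * inter / (a.sum() + b.sum())  # 2|A∩B| / (|A|+|B|)
    jaccard = inter / union                 # |A∩B| / |A∪B|
    return dice, jaccard
```

Dice is always at least as large as Jaccard on the same pair of masks, which is consistent with the scores reported above.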

Development of a No-Reference CT Image Quality Assessment Method Using RadImageNet Pre-trained Deep Learning Models.

Ohashi K, Nagatani Y, Yamazaki A, Yoshigoe M, Iwai K, Uemura R, Shimomura M, Tanimura K, Ishida T

PubMed paper, May 27, 2025
Accurate assessment of computed tomography (CT) image quality is crucial for ensuring diagnostic accuracy, optimizing imaging protocols, and preventing excessive radiation exposure. In clinical settings, where high-quality reference images are often unavailable, developing no-reference image quality assessment (NR-IQA) methods is essential. Recently, CT-NR-IQA methods using deep learning have been widely studied; however, significant challenges remain in handling multiple degradation factors and accurately reflecting real-world degradations. To address these issues, we propose a novel CT-NR-IQA method. Our approach utilizes a dataset that combines two degradation factors (noise and blur) to train convolutional neural network (CNN) models capable of handling multiple degradation factors. Additionally, we leveraged RadImageNet pre-trained models (ResNet50, DenseNet121, InceptionV3, and InceptionResNetV2), allowing the models to learn deep features from large-scale real clinical images, thus enhancing adaptability to real-world degradations without relying on artificially degraded images. The models' performances were evaluated by measuring the correlation between the subjective scores and predicted image quality scores for both artificially degraded and real clinical image datasets. The results demonstrated positive correlations between the subjective and predicted scores for both datasets. In particular, ResNet50 showed the best performance, with a correlation coefficient of 0.910 for the artificially degraded images and 0.831 for the real clinical images. These findings indicate that the proposed method could serve as a potential surrogate for subjective assessment in CT-NR-IQA.
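The evaluation protocol above (correlating predicted quality scores with subjective scores) amounts to a simple correlation computation; the score values below are made-up illustrative numbers, not data from the study.

```python
import numpy as np

# Hypothetical observer (subjective) scores and model-predicted quality scores
subjective = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
predicted = np.array([1.2, 1.9, 3.1, 3.8, 5.2])

# Pearson correlation coefficient between the two score sets
r = np.corrcoef(subjective, predicted)[0, 1]
```

A coefficient near 1, like the 0.910 reported for ResNet50, indicates the model's scores track subjective quality closely.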

Deep learning-based CAD system for Alzheimer's diagnosis using deep downsized KPLS.

Neffati S, Mekki K, Machhout M

PubMed paper, May 27, 2025
Alzheimer's disease (AD) is the most prevalent type of dementia. It is linked with a gradual decline in various brain functions, such as memory. Many research efforts are now directed toward non-invasive procedures for early diagnosis, because early detection greatly benefits patient care and treatment outcomes. In addition to providing an accurate diagnosis and reducing the rate of misdiagnosis, computer-aided diagnosis (CAD) systems are built to give a definitive diagnosis. This paper presents a novel CAD system to determine the stages of AD. Initially, deep learning techniques are utilized to extract features from AD brain MRIs. The extracted features are then reduced using a proposed feature reduction technique named Deep Downsized Kernel Partial Least Squares (DDKPLS). The proposed approach selects a reduced number of samples from the initial information matrix; the chosen samples give rise to a new data matrix that is further processed by KPLS to deal with the high dimensionality. The reduced feature space is finally classified using an extreme learning machine (ELM); the implementation is named DDKPLS-ELM. Reference tests performed on the Kaggle MRI dataset demonstrate the efficacy of the DDKPLS-based classifier, which achieves an accuracy of up to 95.4% and an F1 score of 95.1%.
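For readers unfamiliar with the final classification stage, an extreme learning machine is a single hidden layer with random, fixed input weights and a least-squares-fitted output layer. A minimal toy implementation of that idea (ours, not the paper's code):

```python
import numpy as np

def elm_train(X, y, n_hidden=50, seed=0):
    """Fit an ELM: random fixed hidden weights, least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

Training is a single pseudoinverse solve, which is why ELMs are fast compared with backpropagation-trained networks.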

Machine learning-driven imaging data for early prediction of lung toxicity in breast cancer radiotherapy.

Ungvári T, Szabó D, Győrfi A, Dankovics Z, Kiss B, Olajos J, Tőkési K

PubMed paper, May 27, 2025
One possible adverse effect of breast irradiation is the development of pulmonary fibrosis. The aim of this study was to determine whether planning CT scans can predict which patients are more likely to develop lung lesions after treatment. A retrospective analysis of 242 patient records was performed using different machine learning models. These models showed a remarkable correlation between the occurrence of fibrosis and the Hounsfield units of the lungs in the CT data. Three classification methods (tree-based, kernel-based, and k-nearest neighbors) achieved predictive values above 60%. The human predictive factor (HPF), a mathematical predictive model, further strengthened the association between lung Hounsfield unit (HU) metrics and radiation-induced lung injury (RILI). These approaches can help optimize radiation treatment plans to preserve lung health, and machine learning models and the HPF may also provide effective diagnostic and therapeutic support for other diseases.
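Of the three classifier families mentioned, k-nearest neighbors is the simplest to sketch on tabular HU-derived features. This is a pure-NumPy toy version; the feature values in the usage below are invented for illustration.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Classify `x` by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest_labels = y_train[np.argsort(dists)[:k]]
    return np.bincount(nearest_labels).argmax()   # majority class
```

For example, with one hypothetical feature (mean lung HU) and labels 0 = no fibrosis, 1 = fibrosis, a query near the fibrosis cluster votes 1.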

China Protocol for early screening, precise diagnosis, and individualized treatment of lung cancer.

Wang C, Chen B, Liang S, Shao J, Li J, Yang L, Ren P, Wang Z, Luo W, Zhang L, Liu D, Li W

PubMed paper, May 27, 2025
Early screening, diagnosis, and treatment of lung cancer are pivotal in clinical practice, since tumor stage remains the dominant factor affecting patient survival. Previous initiatives have tried to develop new tools for decision-making in lung cancer. In this study, we proposed the China Protocol, a complete lung cancer workflow tailored to the Chinese population, implemented in steps that include early screening by evaluation of risk factors and the three-dimensional thin-layer image reconstruction technique for low-dose computed tomography (Tre-LDCT), accurate diagnosis via artificial intelligence (AI) and novel biomarkers, and individualized treatment through non-invasive molecular visualization strategies. The application of this protocol has improved the early diagnosis and 5-year survival rates of lung cancer in China. The proportion of early-stage (stage I) lung cancer has increased from 46.3% to 65.6%, with a 5-year survival rate of 90.4%. Moreover, for stage IA1 lung cancer in particular, the diagnosis rate has improved from 16% to 27.9%, and the 5-year survival rate of this group reached 97.5%. Thus, we defined stage IA1 lung cancer, a cohort that benefits significantly from early diagnosis and treatment, as "ultra-early stage lung cancer," aiming to provide an intuitive label for more precise management and survival improvement. In the future, we will promote our findings to multicenter remote areas through medical alliances and mobile health services, with the aim of advancing the diagnosis and treatment of lung cancer.

Dose calculation in nuclear medicine with magnetic resonance imaging images using Monte Carlo method.

Vu LH, Thao NTP, Trung NT, Hau PVT, Hong Loan TT

PubMed paper, May 27, 2025
In recent years, scientists have been trying to convert magnetic resonance imaging (MRI) images into computed tomography (CT) images for dose calculation, taking advantage of the benefits of MRI. The main approaches to image conversion are bulk density, atlas registration, and machine learning. These methods have limitations in accuracy and time consumption and require large datasets for image conversion. In this study, a novel "voxels spawn voxels" technique, combined with the "orthonormalize" feature in the Carimas software, was developed to build a conversion dataset from MRI intensity to Hounsfield unit (HU) values for several structural regions, including the gluteus maximus, liver, kidneys, spleen, pancreas, and colon. The original CT images and the converted MRI images were imported into the Geant4/Gamos software for dose calculation. The method gives good agreement (dose differences below 5%) in most organs, except the intestine (18%).
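A per-region intensity-to-HU conversion of the kind described can be illustrated with piecewise-linear interpolation over calibration control points. All numbers below are invented for illustration; this is not the paper's conversion table or method.

```python
import numpy as np

# Hypothetical calibration: paired (MRI intensity, HU) control points per region
CALIBRATION = {
    "liver": ([0.0, 500.0, 1000.0], [40.0, 55.0, 70.0]),
}

def mri_to_hu(intensity, region):
    """Map an MRI intensity to an HU value by piecewise-linear
    interpolation of the region's calibration points."""
    xs, hus = CALIBRATION[region]
    return float(np.interp(intensity, xs, hus))
```

Intensities outside the calibrated range are clamped to the endpoint HU values, which is `np.interp`'s default behavior.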

STA-Risk: A Deep Dive of Spatio-Temporal Asymmetries for Breast Cancer Risk Prediction

Zhengbo Zhou, Dooman Arefan, Margarita Zuley, Jules Sumkin, Shandong Wu

arXiv preprint, May 27, 2025
Predicting the risk of developing breast cancer is an important clinical tool to guide early intervention and tailor personalized screening strategies. Early risk models have limited performance, and recent machine learning-based analyses of mammogram images have shown encouraging risk prediction effects. These models, however, are limited to the use of a single exam or tend to overlook the nuanced evolution of breast tissue in the spatial and temporal details of longitudinal imaging exams that are indicative of breast cancer risk. In this paper, we propose STA-Risk (Spatial and Temporal Asymmetry-based Risk Prediction), a novel Transformer-based model that captures fine-grained mammographic imaging evolution simultaneously from bilateral and longitudinal asymmetries for breast cancer risk prediction. STA-Risk innovates through side encoding and temporal encoding that learn spatial-temporal asymmetries, regulated by a customized asymmetry loss. We performed extensive experiments with two independent mammogram datasets and achieved superior performance compared with four representative SOTA models for 1- to 5-year future risk prediction. Source code will be released upon publication of the paper.

MedBridge: Bridging Foundation Vision-Language Models to Medical Image Diagnosis

Yitong Li, Morteza Ghahremani, Christian Wachinger

arXiv preprint, May 27, 2025
Recent vision-language foundation models deliver state-of-the-art results on natural image classification but falter on medical images due to pronounced domain shifts. At the same time, training a medical foundation model requires substantial resources, including extensive annotated data and high computational capacity. To bridge this gap with minimal overhead, we introduce MedBridge, a lightweight multimodal adaptation framework that re-purposes pretrained VLMs for accurate medical image diagnosis. MedBridge comprises three key components. First, a Focal Sampling module extracts high-resolution local regions to capture subtle pathological features and compensate for the limited input resolution of general-purpose VLMs. Second, a Query Encoder (QEncoder) injects a small set of learnable queries that attend to the frozen feature maps of the VLM, aligning them with medical semantics without retraining the entire backbone. Third, a Mixture of Experts mechanism, driven by the learnable queries, harnesses the complementary strengths of diverse VLMs to maximize diagnostic performance. We evaluate MedBridge on five medical imaging benchmarks across three key adaptation tasks, demonstrating its superior performance in both cross-domain and in-domain adaptation settings, even under varying levels of training data availability. Notably, MedBridge achieved a 6-15% improvement in AUC compared to state-of-the-art VLM adaptation methods in multi-label thoracic disease diagnosis, underscoring its effectiveness in leveraging foundation models for accurate and data-efficient medical diagnosis. Our code is available at https://github.com/ai-med/MedBridge.
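The Focal Sampling idea (feeding high-resolution local regions to a VLM with limited input resolution) can be approximated by simple tiling. This is our illustrative stand-in, not the MedBridge implementation, which may sample regions differently.

```python
import numpy as np

def focal_crops(image, crop=64, stride=64):
    """Tile a 2D image into local high-resolution patches so each
    patch, not a downsampled whole image, is fed to the model."""
    h, w = image.shape[:2]
    return [image[y:y + crop, x:x + crop]
            for y in range(0, h - crop + 1, stride)
            for x in range(0, w - crop + 1, stride)]
```

Overlapping patches can be produced by choosing a stride smaller than the crop size.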
