
Prediction of Motor Symptom Progression of Parkinson's Disease Through Multimodal Imaging-Based Machine Learning.

Dai Y, Imami M, Hu R, Zhang C, Zhao L, Kargilis DC, Zhang H, Yu G, Liao WH, Jiao Z, Zhu C, Yang L, Bai HX

PubMed · Jul 7, 2025
The unrelenting progression of Parkinson's disease (PD) leads to severely impaired quality of life, with considerable variability in progression rates among patients. Identifying biomarkers of PD progression could improve clinical monitoring and management. Radiomics, which facilitates data extraction from imaging for use in machine learning models, offers a promising approach to this challenge. This study investigated the use of multi-modality imaging, combining conventional magnetic resonance imaging (MRI) and dopamine transporter single photon emission computed tomography (DAT-SPECT), to predict motor progression in PD. Motor progression was measured by changes in the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS) motor subscale scores. Radiomic features were selected from the midbrain region in MRI and from the caudate nucleus, putamen, and ventral striatum in DAT-SPECT. Patients were stratified into fast vs. slow progression based on the change in MDS-UPDRS at follow-up. Various feature selection methods and machine learning classifiers were evaluated for each modality, and the best-performing models were combined into an ensemble. On the internal test set, the ensemble model, which integrated clinical information, T1WI, T2WI, and DAT-SPECT, achieved a ROC AUC of 0.93 (95% CI: 0.80-1.00), PR AUC of 0.88 (95% CI: 0.61-1.00), accuracy of 0.85 (95% CI: 0.70-0.89), sensitivity of 0.72 (95% CI: 0.43-1.00), and specificity of 0.92 (95% CI: 0.77-1.00). On the external test set, the ensemble model outperformed single-modality models with a ROC AUC of 0.77 (95% CI: 0.53-0.93), PR AUC of 0.79 (95% CI: 0.56-0.95), accuracy of 0.68 (95% CI: 0.50-0.86), sensitivity of 0.53 (95% CI: 0.27-0.82), and specificity of 0.82 (95% CI: 0.55-1.00). In conclusion, this study developed an imaging-based model to identify baseline characteristics predictive of disease progression in PD patients. The findings highlight the strength of using multiple imaging modalities and integrating imaging data with clinical information to enhance the prediction of motor progression in PD.
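The abstract does not include code; as a rough, hypothetical sketch of the per-modality-model-plus-ensemble idea it describes, the snippet below uses synthetic feature matrices and simple scikit-learn pipelines (feature selection plus logistic regression) combined by soft voting. All names and data are stand-ins, not the study's models or cohort.
```python
# Hypothetical sketch: per-modality radiomics classifiers combined by soft voting.
# The feature matrices and labels are synthetic stand-ins, not the study's data.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 120
y = rng.integers(0, 2, n)                 # 1 = fast progression, 0 = slow progression
X_mri = rng.normal(size=(n, 100))         # midbrain radiomic features (MRI)
X_spect = rng.normal(size=(n, 80))        # striatal radiomic features (DAT-SPECT)
X_clin = rng.normal(size=(n, 5))          # clinical covariates

def modality_model(k):
    # feature selection + linear classifier for one modality
    return make_pipeline(StandardScaler(), SelectKBest(f_classif, k=k),
                         LogisticRegression(max_iter=1000))

modalities = [(X_mri, modality_model(20)), (X_spect, modality_model(15)),
              (X_clin, modality_model(5))]

def ensemble_proba(train_idx, test_idx):
    # soft voting: average the predicted probabilities of the per-modality models
    probs = []
    for X, model in modalities:
        model.fit(X[train_idx], y[train_idx])
        probs.append(model.predict_proba(X[test_idx])[:, 1])
    return np.mean(probs, axis=0)

aucs = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X_clin, y):
    aucs.append(roc_auc_score(y[te], ensemble_proba(tr, te)))
print(f"ensemble ROC AUC on synthetic data: {np.mean(aucs):.2f}")
```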

An enhanced fusion of transfer learning models with optimization based clinical diagnosis of lung and colon cancer using biomedical imaging.

Vinoth NAS, Kalaivani J, Arieth RM, Sivasakthiselvan S, Park GC, Joshi GP, Cho W

PubMed · Jul 7, 2025
Lung and colon cancers (LCC) are among the leading causes of human death and disease. Early diagnosis of these disorders involves various tests, namely ultrasound (US), magnetic resonance imaging (MRI), and computed tomography (CT). Beyond diagnostic imaging, histopathology is one of the most effective methods, as it delivers cell-level imaging of the tissue under inspection. However, only a limited number of patients receive a final diagnosis and early treatment, and inter-observer error remains possible. Clinical informatics is an interdisciplinary field that integrates healthcare, information technology, and data analytics to improve patient care, clinical decision-making, and medical research. Recently, deep learning (DL) has proved effective in the medical sector: cancer diagnosis can be automated by utilizing the capabilities of artificial intelligence (AI), enabling faster analysis of more cases cost-effectively. With extensive technical developments, DL has emerged as an effective tool in medical settings, mainly in medical imaging. This study presents an Enhanced Fusion of Transfer Learning Models and Optimization-Based Clinical Biomedical Imaging for Accurate Lung and Colon Cancer Diagnosis (FTLMO-BILCCD) model. The main objective of the FTLMO-BILCCD technique is to develop an efficient method for LCC detection using clinical biomedical imaging. Initially, the image pre-processing stage applies a median filter (MF) to eliminate unwanted noise from the input image data. Fusion models, namely CapsNet, EfficientNetV2, and MobileNet-V3 Large, are employed for feature extraction. The FTLMO-BILCCD technique implements a hybrid of temporal pattern attention and a bidirectional gated recurrent unit (TPA-BiGRU) for classification. Finally, the beluga whale optimization (BWO) technique optimally tunes the hyperparameters of the TPA-BiGRU model, resulting in greater classification performance. The FTLMO-BILCCD approach was evaluated on the LCC-HI dataset, where it achieved a superior accuracy of 99.16% over existing models.
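No implementation details are given beyond the backbone names; the snippet below is a loose sketch, under stated assumptions, of feature-level fusion of two of the named backbones using torchvision (EfficientNetV2-S and MobileNetV3-Large). The CapsNet branch, the TPA-BiGRU head, and the BWO hyperparameter search are not reproduced, and the class count is only an example.
```python
# Illustrative sketch only: concatenating features from two pretrained backbones.
# The CapsNet branch and the BWO-tuned TPA-BiGRU classifier from the paper are omitted.
import torch
import torch.nn as nn
from torchvision import models

class FusedBackbone(nn.Module):
    def __init__(self, num_classes=5):                   # example class count (assumption)
        super().__init__()
        eff = models.efficientnet_v2_s(weights=None)     # pass pretrained weights in practice
        mob = models.mobilenet_v3_large(weights=None)
        self.eff = nn.Sequential(eff.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.mob = nn.Sequential(mob.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(1280 + 960, num_classes)   # channel widths of the two backbones

    def forward(self, x):
        fused = torch.cat([self.eff(x), self.mob(x)], dim=1)   # simple feature-level fusion
        return self.head(fused)

model = FusedBackbone()
logits = model(torch.randn(2, 3, 224, 224))              # dummy histopathology patches
print(logits.shape)                                      # torch.Size([2, 5])
```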

Deep Learning based Collateral Scoring on Multi-Phase CTA in patients with acute ischemic stroke in MCA region.

Liu H, Zhang J, Chen S, Ganesh A, Xu Y, Hu B, Menon BK, Qiu W

PubMed · Jul 7, 2025
Collateral circulation is a critical determinant of clinical outcomes in acute ischemic stroke (AIS) patients and plays a key role in patient selection for endovascular therapy. This study aimed to develop an automated method for assessing and quantifying collateral circulation on multi-phase CT angiography, with the goal of reducing observer variability and improving diagnostic efficiency. This retrospective study included mCTA images from 420 AIS patients within 14 hours of stroke symptom onset. A deep learning-based classification method with a tailored preprocessing module was developed to assess collateral circulation status. Manual evaluations using the simplified Menon method served as the ground truth. Model performance was assessed through five-fold cross-validation using metrics including accuracy, F1 score, precision, sensitivity, specificity, and the area under the receiver operating characteristic curve. The median age of the 420 patients was 73 years (IQR: 64-80 years; 222 men), and the median time from symptom onset to mCTA acquisition was 123 minutes (IQR: 79-245.5 minutes). The proposed framework achieved an accuracy of 87.6% for three-class collateral scores (good, intermediate, poor), with an F1 score of 85.7%, precision of 83.8%, sensitivity of 89.3%, specificity of 92.9%, AUC of 93.7%, ICC of 0.832, and kappa of 0.781. For two-class collateral scores, the framework achieved 94.0% accuracy for good vs. non-good scores (F1 score 94.4%, precision 95.9%, sensitivity 93.0%, specificity 94.1%, AUC 97.1%, ICC 0.882, kappa 0.881) and 97.1% accuracy for poor vs. non-poor scores (F1 score 98.5%, precision 98.0%, sensitivity 99.0%, specificity 84.8%, AUC 95.6%, ICC 0.740, kappa 0.738). Additional analyses demonstrated that multi-phase CTA outperformed single- or two-phase CTA in collateral assessment. The proposed deep learning framework demonstrated high accuracy and consistency with radiologist-assigned scores for evaluating collateral circulation on multi-phase CTA in AIS patients. This method may offer a useful tool to aid clinical decision-making, reducing variability and improving diagnostic workflow. AIS = Acute Ischemic Stroke; mCTA = multi-phase Computed Tomography Angiography; DL = deep learning; AUC = area under the receiver operating characteristic curve; IQR = interquartile range; ROC = receiver operating characteristic.
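As a small illustration of how the reported binary metrics (accuracy, F1, precision, sensitivity, specificity, AUC, kappa) can be computed, here is a scikit-learn sketch on synthetic predictions; it is not the authors' evaluation code.
```python
# Minimal sketch of the binary (e.g. good vs. non-good) evaluation metrics,
# computed with scikit-learn on synthetic labels and scores.
import numpy as np
from sklearn.metrics import (accuracy_score, cohen_kappa_score, confusion_matrix,
                             f1_score, precision_score, recall_score, roc_auc_score)

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 200)                        # 1 = good collaterals
y_prob = np.clip(y_true * 0.7 + rng.normal(0.3, 0.25, 200), 0, 1)
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    "accuracy": accuracy_score(y_true, y_pred),
    "f1": f1_score(y_true, y_pred),
    "precision": precision_score(y_true, y_pred),
    "sensitivity": recall_score(y_true, y_pred),        # tp / (tp + fn)
    "specificity": tn / (tn + fp),
    "auc": roc_auc_score(y_true, y_prob),
    "kappa": cohen_kappa_score(y_true, y_pred),
}
print({k: round(v, 3) for k, v in metrics.items()})
```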

AG-MS3D-CNN multiscale attention guided 3D convolutional neural network for robust brain tumor segmentation across MRI protocols.

Lilhore UK, Sunder R, Simaiya S, Alsafyani M, Monish Khan MD, Alroobaea R, Alsufyani H, Baqasah AM

PubMed · Jul 7, 2025
Accurate segmentation of brain tumors from multimodal Magnetic Resonance Imaging (MRI) plays a critical role in diagnosis, treatment planning, and disease monitoring in neuro-oncology. Traditional methods of tumor segmentation, often manual and labour-intensive, are prone to inconsistencies and inter-observer variability. Recently, deep learning models, particularly Convolutional Neural Networks (CNNs), have shown great promise in automating this process. However, these models face challenges in terms of generalization across diverse datasets, accurate tumor boundary delineation, and uncertainty estimation. To address these challenges, we propose AG-MS3D-CNN, an attention-guided multiscale 3D convolutional neural network for brain tumor segmentation. Our model integrates local and global contextual information through multiscale feature extraction and leverages spatial attention mechanisms to enhance boundary delineation, particularly in complex tumor regions. We also introduce Monte Carlo dropout for uncertainty estimation, providing clinicians with confidence scores for each segmentation, which is crucial for informed decision-making. Furthermore, we adopt a multitask learning framework, which enables the simultaneous segmentation, classification, and volume estimation of tumors. To ensure robustness and generalizability across diverse MRI acquisition protocols and scanners, we integrate a domain adaptation module into the network. Extensive evaluations on the BraTS 2021 dataset and additional external datasets, such as OASIS, ADNI, and IXI, demonstrate the superior performance of AG-MS3D-CNN compared to existing state-of-the-art methods. Our model achieves high Dice scores and shows excellent robustness, making it a valuable tool for clinical decision support in neuro-oncology.
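One component described above, Monte Carlo dropout for voxel-wise uncertainty, can be sketched in a few lines of PyTorch; the tiny 3D network below is a placeholder for illustration only, not AG-MS3D-CNN itself.
```python
# Sketch of Monte Carlo dropout uncertainty estimation for 3D segmentation.
# The toy network and random input are placeholders, not the paper's model or data.
import torch
import torch.nn as nn

class Tiny3DSegNet(nn.Module):
    def __init__(self, in_ch=4, n_classes=4, p_drop=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout3d(p_drop),                      # kept stochastic at inference for MC dropout
            nn.Conv3d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    model.train()                                      # keep dropout layers active
    probs = torch.stack([model(x).softmax(dim=1) for _ in range(n_samples)])
    mean = probs.mean(dim=0)                           # averaged segmentation probabilities
    uncertainty = probs.var(dim=0).mean(dim=1)         # per-voxel predictive variance
    return mean, uncertainty

model = Tiny3DSegNet()
x = torch.randn(1, 4, 32, 32, 32)                      # 4 MRI modalities, small patch
mean, unc = mc_dropout_predict(model, x)
print(mean.shape, unc.shape)                           # (1, 4, 32, 32, 32) and (1, 32, 32, 32)
```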

Development and retrospective validation of an artificial intelligence system for diagnostic assessment of prostate biopsies: study protocol.

Mulliqi N, Blilie A, Ji X, Szolnoky K, Olsson H, Titus M, Martinez Gonzalez G, Boman SE, Valkonen M, Gudlaugsson E, Kjosavik SR, Asenjo J, Gambacorta M, Libretti P, Braun M, Kordek R, Łowicki R, Hotakainen K, Väre P, Pedersen BG, Sørensen KD, Ulhøi BP, Rantalainen M, Ruusuvuori P, Delahunt B, Samaratunga H, Tsuzuki T, Janssen EAM, Egevad L, Kartasalo K, Eklund M

PubMed · Jul 7, 2025
Histopathological evaluation of prostate biopsies using the Gleason scoring system is critical for prostate cancer diagnosis and treatment selection. However, grading variability among pathologists can lead to inconsistent assessments, risking inappropriate treatment. Similar challenges complicate the assessment of other prognostic features like cribriform cancer morphology and perineural invasion. Many pathology departments are also facing an increasingly unsustainable workload due to rising prostate cancer incidence and a decreasing pathologist workforce coinciding with increasing requirements for more complex assessments and reporting. Digital pathology and artificial intelligence (AI) algorithms for analysing whole slide images show promise in improving the accuracy and efficiency of histopathological assessments. Studies have demonstrated AI's capability to diagnose and grade prostate cancer comparably to expert pathologists. However, external validations on diverse data sets have been limited and often show reduced performance. Historically, there have been no well-established guidelines for AI study designs and validation methods. Diagnostic assessments of AI systems often lack preregistered protocols and rigorous external cohort sampling, essential for reliable evidence of their safety and accuracy. This study protocol covers the retrospective validation of an AI system for prostate biopsy assessment. The primary objective of the study is to develop a high-performing and robust AI model for diagnosis and Gleason scoring of prostate cancer in core needle biopsies, and at scale evaluate whether it can generalise to fully external data from independent patients, pathology laboratories and digitalisation platforms. The secondary objectives cover AI performance in estimating cancer extent and detecting cribriform prostate cancer and perineural invasion. This protocol outlines the steps for data collection, predefined partitioning of data cohorts for AI model training and validation, model development and predetermined statistical analyses, ensuring systematic development and comprehensive validation of the system. The protocol adheres to Transparent Reporting of a multivariable prediction model of Individual Prognosis Or Diagnosis+AI (TRIPOD+AI), Protocol Items for External Cohort Evaluation of a Deep Learning System in Cancer Diagnostics (PIECES), Checklist for AI in Medical Imaging (CLAIM) and other relevant best practices. Data collection and usage were approved by the respective ethical review boards of each participating clinical laboratory, and centralised anonymised data handling was approved by the Swedish Ethical Review Authority. The study will be conducted in agreement with the Helsinki Declaration. The findings will be disseminated in peer-reviewed publications (open access).
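One concrete element of such predefined cohort partitioning is patient-level splitting, so that biopsies from the same patient never span training and validation data. The following is a minimal scikit-learn sketch with hypothetical IDs, not the study's cohorts or protocol code.
```python
# Hedged illustration of patient-level data partitioning: slides from one patient
# never appear in both partitions. IDs below are synthetic.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

rng = np.random.default_rng(2)
slide_ids = np.arange(1000)                      # hypothetical whole-slide image IDs
patient_ids = rng.integers(0, 300, size=1000)    # several biopsies per patient

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(slide_ids, groups=patient_ids))

# No patient contributes slides to both cohorts.
assert set(patient_ids[train_idx]).isdisjoint(patient_ids[val_idx])
print(len(train_idx), len(val_idx))
```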

Development and validation of an improved volumetric breast density estimation model using the ResNet technique.

Asai Y, Yamamuro M, Yamada T, Kimura Y, Ishii K, Nakamura Y, Otsuka Y, Kondo Y

PubMed · Jul 7, 2025
Temporal changes in volumetric breast density (VBD) may serve as prognostic biomarkers for predicting the risk of future breast cancer development. However, accurately measuring VBD from archived X-ray mammograms remains challenging. In a previous study, we proposed a method to estimate VBD using imaging parameters (tube voltage, tube current, and exposure time) and patient age. This approach, based on a multiple regression model, achieved a coefficient of determination (R²) of 0.868.
Approach:
In this study, we developed and applied machine learning models (Random Forest and XGBoost) and the deep learning model Residual Network (ResNet) to the same dataset. Model performance was assessed using several metrics: coefficient of determination, correlation coefficient, root mean square error, mean absolute error, root mean square percentage error, and mean absolute percentage error. Five-fold cross-validation was conducted to ensure robust validation.
Main results:
The best-performing fold yielded R² values of 0.895, 0.907, and 0.918 for Random Forest, XGBoost, and ResNet, respectively, all surpassing the previous study's result. ResNet consistently achieved the lowest error values across all metrics.
Significance:
These findings suggest that ResNet can accurately determine VBD from past mammography, a task that had not previously been realised. We believe this achievement advances research aimed at predicting future breast cancer risk by enabling high-accuracy time-series analyses of retrospective VBD.
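The article's code is not provided; the snippet below is a stand-in comparison on synthetic data, using scikit-learn's HistGradientBoostingRegressor and an MLP in place of XGBoost and ResNet, purely to illustrate the five-fold cross-validation and error metrics described above.
```python
# Rough, hypothetical sketch: regressing VBD from exposure parameters and age on synthetic data.
# HistGradientBoostingRegressor and an MLP stand in for the paper's XGBoost and ResNet.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor, RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import KFold
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 500
# Inputs: tube voltage (kV), tube current (mA), exposure time (ms), patient age (years).
X = np.column_stack([rng.uniform(26, 32, n), rng.uniform(40, 120, n),
                     rng.uniform(300, 1500, n), rng.uniform(35, 80, n)])
y = 0.5 * X[:, 0] + 0.01 * X[:, 2] - 0.1 * X[:, 3] + rng.normal(0, 1.0, n)   # synthetic "VBD"

models = {
    "RandomForest": RandomForestRegressor(n_estimators=200, random_state=0),
    "GradientBoosting": HistGradientBoostingRegressor(random_state=0),
    "MLP": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)),
}

for name, model in models.items():
    r2s, rmses, maes = [], [], []
    for tr, te in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
        model.fit(X[tr], y[tr])
        pred = model.predict(X[te])
        r2s.append(r2_score(y[te], pred))
        rmses.append(mean_squared_error(y[te], pred) ** 0.5)
        maes.append(mean_absolute_error(y[te], pred))
    print(f"{name}: R2={np.mean(r2s):.3f}  RMSE={np.mean(rmses):.3f}  MAE={np.mean(maes):.3f}")
```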

Self-supervised Deep Learning for Denoising in Ultrasound Microvascular Imaging

Lijie Huang, Jingyi Yin, Jingke Zhang, U-Wai Lok, Ryan M. DeRuiter, Jieyang Jin, Kate M. Knoll, Kendra E. Petersen, James D. Krier, Xiang-yang Zhu, Gina K. Hesley, Kathryn A. Robinson, Andrew J. Bentall, Thomas D. Atwell, Andrew D. Rule, Lilach O. Lerman, Shigao Chen, Chengwu Huang

arXiv preprint · Jul 7, 2025
Ultrasound microvascular imaging (UMI) is often hindered by low signal-to-noise ratio (SNR), especially in contrast-free or deep tissue scenarios, which impairs subsequent vascular quantification and reliable disease diagnosis. To address this challenge, we propose Half-Angle-to-Half-Angle (HA2HA), a self-supervised denoising framework specifically designed for UMI. HA2HA constructs training pairs from complementary angular subsets of beamformed radio-frequency (RF) blood flow data, across which vascular signals remain consistent while noise varies. HA2HA was trained using in-vivo contrast-free pig kidney data and validated across diverse datasets, including contrast-free and contrast-enhanced data from pig kidneys, as well as human liver and kidney. An improvement exceeding 15 dB in both contrast-to-noise ratio (CNR) and SNR was observed, indicating a substantial enhancement in image quality. In addition to power Doppler imaging, denoising directly in the RF domain is also beneficial for other downstream processing such as color Doppler imaging (CDI). CDI results of human liver derived from the HA2HA-denoised signals exhibited improved microvascular flow visualization, with a suppressed noisy background. HA2HA offers a label-free, generalizable, and clinically applicable solution for robust vascular imaging in both contrast-free and contrast-enhanced UMI.
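A conceptual sketch of the pairing strategy follows: the vascular signal is shared across the two half-angle subsets while the noise is independent, so one beamformed image can serve as the training target for denoising the other, in a Noise2Noise-style loop. The toy tensors and tiny CNN below are placeholders, not the authors' RF-domain pipeline.
```python
# Conceptual sketch of HA2HA-style training: predict one half-angle image from the other.
# Synthetic data only; real HA2HA operates on beamformed RF blood-flow data.
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(32, 1, 3, padding=1))
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

signal = torch.randn(8, 1, 64, 64)                    # shared "vascular" content
half_a = signal + 0.5 * torch.randn_like(signal)      # first half-angle subset (independent noise)
half_b = signal + 0.5 * torch.randn_like(signal)      # second half-angle subset (independent noise)

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(denoiser(half_a), half_b)          # noise-to-noise style supervision
    loss.backward()
    opt.step()
```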

MedGemma Technical Report

Andrew Sellergren, Sahar Kazemzadeh, Tiam Jaroensri, Atilla Kiraly, Madeleine Traverse, Timo Kohlberger, Shawn Xu, Fayaz Jamil, Cían Hughes, Charles Lau, Justin Chen, Fereshteh Mahvar, Liron Yatziv, Tiffany Chen, Bram Sterling, Stefanie Anna Baby, Susanna Maria Baby, Jeremy Lai, Samuel Schmidgall, Lu Yang, Kejia Chen, Per Bjornsson, Shashir Reddy, Ryan Brush, Kenneth Philbrick, Howard Hu, Howard Yang, Richa Tiwari, Sunny Jansen, Preeti Singh, Yun Liu, Shekoofeh Azizi, Aishwarya Kamath, Johan Ferret, Shreya Pathak, Nino Vieillard, Ramona Merhej, Sarah Perrin, Tatiana Matejovicova, Alexandre Ramé, Morgane Riviere, Louis Rouillard, Thomas Mesnard, Geoffrey Cideron, Jean-bastien Grill, Sabela Ramos, Edouard Yvinec, Michelle Casbon, Elena Buchatskaya, Jean-Baptiste Alayrac, Dmitry Lepikhin, Vlad Feinberg, Sebastian Borgeaud, Alek Andreev, Cassidy Hardin, Robert Dadashi, Léonard Hussenot, Armand Joulin, Olivier Bachem, Yossi Matias, Katherine Chou, Avinatan Hassidim, Kavi Goel, Clement Farabet, Joelle Barral, Tris Warkentin, Jonathon Shlens, David Fleet, Victor Cotruta, Omar Sanseviero, Gus Martins, Phoebe Kirk, Anand Rao, Shravya Shetty, David F. Steiner, Can Kirmizibayrak, Rory Pilgrim, Daniel Golden, Lin Yang

arXiv preprint · Jul 7, 2025
Artificial intelligence (AI) has significant potential in healthcare applications, but its training and deployment faces challenges due to healthcare's diverse data, complex tasks, and the need to preserve privacy. Foundation models that perform well on medical tasks and require less task-specific tuning data are critical to accelerate the development of healthcare AI applications. We introduce MedGemma, a collection of medical vision-language foundation models based on Gemma 3 4B and 27B. MedGemma demonstrates advanced medical understanding and reasoning on images and text, significantly exceeding the performance of similar-sized generative models and approaching the performance of task-specific models, while maintaining the general capabilities of the Gemma 3 base models. For out-of-distribution tasks, MedGemma achieves 2.6-10% improvement on medical multimodal question answering, 15.5-18.1% improvement on chest X-ray finding classification, and 10.8% improvement on agentic evaluations compared to the base models. Fine-tuning MedGemma further improves performance in subdomains, reducing errors in electronic health record information retrieval by 50% and reaching comparable performance to existing specialized state-of-the-art methods for pneumothorax classification and histopathology patch classification. We additionally introduce MedSigLIP, a medically-tuned vision encoder derived from SigLIP. MedSigLIP powers the visual understanding capabilities of MedGemma and as an encoder achieves comparable or better performance than specialized medical image encoders. Taken together, the MedGemma collection provides a strong foundation of medical image and text capabilities, with potential to significantly accelerate medical research and development of downstream applications. The MedGemma collection, including tutorials and model weights, can be found at https://goo.gle/medgemma.
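The report states that model weights and tutorials are available at the linked page; the snippet below is only an assumed usage pattern with the Hugging Face transformers pipeline. The model identifier, pipeline task, and message format are assumptions not taken from the abstract, so the official documentation should be treated as authoritative.
```python
# Assumed usage sketch (not from the report): loading a MedGemma checkpoint via transformers.
# Model ID, pipeline task, and chat format are assumptions; see https://goo.gle/medgemma.
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")
messages = [{
    "role": "user",
    "content": [
        {"type": "image", "image": "chest_xray.png"},   # hypothetical local image path
        {"type": "text", "text": "Describe the main finding in this chest X-ray."},
    ],
}]
print(pipe(text=messages, max_new_tokens=128))
```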

Deep Learning Model Based on Dual-energy CT for Assessing Cervical Lymph Node Metastasis in Oral Squamous Cell Carcinoma.

Qi YM, Zhang LJ, Wang Y, Duan XH, Li YJ, Xiao EH, Luo YH

PubMed · Jul 7, 2025
Accurate detection of lymph node metastasis (LNM) in oral squamous cell carcinoma (OSCC) is crucial for treatment planning. This study developed a deep learning model using dual-energy CT to improve LNM detection. Preoperative dual-energy CT images (Iodine Map, Fat Map, monoenergetic 70 keV, and RHO/Z Map) and clinical data were collected from two centers. From the first center, 248 patients were divided into training (n=198) and internal validation (n=50) cohorts (8:2 ratio), while 106 patients from the second center comprised the external validation cohort. Region-of-interest images from all four sequences were stacked along the channel dimension to generate fused four-channel composite images. Sixteen deep learning models were developed: three architectures (Crossformer, Densenet169, Squeezenet1_0) were applied to each single-sequence image and to the fused image, followed by MLP integration. Additionally, a Crossformer_Transformer model was constructed based on the fused image. Among the 16 deep learning models trained in this study, the Crossformer_Transformer model demonstrated the best diagnostic performance in predicting LNM in OSCC patients, with an AUC of 0.960 (95% CI: 0.9355-0.9842) on the training dataset, and 0.881 (95% CI: 0.7396-1.0000) and 0.881 (95% CI: 0.8033-0.9590) on the internal and external validation sets, respectively. The average AUC for radiologists across both validation cohorts (0.723-0.819) was lower than that of the model. The Crossformer_Transformer model, validated on multicenter data, shows strong potential for improving preoperative risk assessment and clinical decision-making in cervical LNM for OSCC patients.
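A minimal sketch of the four-channel fusion step described above is shown below, with synthetic arrays standing in for the ROI crops from the four dual-energy CT sequences.
```python
# Simple illustration of stacking the four dual-energy CT sequences along the channel
# dimension to form a fused composite image. Arrays here are synthetic placeholders.
import numpy as np

roi_shape = (96, 96)
iodine = np.random.rand(*roi_shape).astype(np.float32)   # Iodine Map ROI
fat = np.random.rand(*roi_shape).astype(np.float32)      # Fat Map ROI
mono70 = np.random.rand(*roi_shape).astype(np.float32)   # 70 keV monoenergetic ROI
rhoz = np.random.rand(*roi_shape).astype(np.float32)     # RHO/Z Map ROI

def normalize(img):
    # per-sequence min-max scaling so the channels share a comparable range
    return (img - img.min()) / (img.max() - img.min() + 1e-8)

fused = np.stack([normalize(c) for c in (iodine, fat, mono70, rhoz)], axis=0)
print(fused.shape)   # (4, 96, 96): a four-channel composite image for the classifier
```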

Efficient Ultrasound Breast Cancer Detection with DMFormer: A Dynamic Multiscale Fusion Transformer.

Guo L, Zhang H, Ma C

PubMed · Jul 7, 2025
To develop an advanced deep learning model for accurate differentiation between benign and malignant masses in ultrasound breast cancer screening, addressing the challenges of noise, blur, and complex tissue structures in ultrasound imaging. We propose Dynamic Multiscale Fusion Transformer (DMFormer), a novel Transformer-based architecture featuring a dynamic multiscale feature fusion mechanism. The model integrates window attention for local feature interaction with grid attention for global context mixing, enabling comprehensive capture of both fine-grained tissue details and broader anatomical contexts. DMFormer was evaluated on two independent datasets and compared against state-of-the-art approaches, including convolutional neural networks, Transformer-based architectures, and hybrid models. The model achieved areas under the curve of 90.48% and 86.57% on the respective datasets, consistently outperforming all comparison models. DMFormer demonstrates superior performance in ultrasound breast cancer detection through its innovative dual-attention approach. The model's ability to effectively balance local and global feature processing while maintaining computational efficiency represents a significant advancement in medical image analysis. These results validate DMFormer's potential for enhancing the accuracy and reliability of breast cancer screening in clinical settings.
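As a loose illustration of the dual-attention idea (window attention for local interaction, grid attention for global context mixing), here is a simplified PyTorch block with MaxViT-style partitioning; it is an assumption-laden sketch, not the DMFormer implementation.
```python
# Simplified dual-attention block: window attention (local) followed by grid attention (global).
# Partitioning loosely follows MaxViT; dimensions and layout are illustrative assumptions.
import torch
import torch.nn as nn

def window_partition(x, w):
    # (B, H, W, C) -> (B * H//w * W//w, w*w, C): tokens grouped into local windows
    B, H, W, C = x.shape
    x = x.view(B, H // w, w, W // w, w, C).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, w * w, C)

def grid_partition(x, g):
    # (B, H, W, C) -> (B * H//g * W//g, g*g, C): tokens sampled on a dilated grid,
    # so each attention group spans the whole feature map
    B, H, W, C = x.shape
    x = x.view(B, g, H // g, g, W // g, C).permute(0, 2, 4, 1, 3, 5)
    return x.reshape(-1, g * g, C)

class DualAttentionBlock(nn.Module):
    def __init__(self, dim=64, heads=4, window=8, grid=8):
        super().__init__()
        self.window, self.grid = window, grid
        self.local_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                               # x: (B, H, W, C)
        B, H, W, C = x.shape
        win = window_partition(x, self.window)          # local feature interaction
        win, _ = self.local_attn(win, win, win)
        x = x + win.view(B, H // self.window, W // self.window, self.window, self.window, C) \
                   .permute(0, 1, 3, 2, 4, 5).reshape(B, H, W, C)
        grd = grid_partition(x, self.grid)              # global context mixing
        grd, _ = self.global_attn(grd, grd, grd)
        x = x + grd.view(B, H // self.grid, W // self.grid, self.grid, self.grid, C) \
                   .permute(0, 3, 1, 4, 2, 5).reshape(B, H, W, C)
        return x

block = DualAttentionBlock()
out = block(torch.randn(2, 32, 32, 64))                 # dummy ultrasound feature map
print(out.shape)                                        # torch.Size([2, 32, 32, 64])
```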