Hierarchical Characterization of Brain Dynamics via State Space-based Vector Quantization

Yanwu Yang, Thomas Wolfers

arXiv preprint · Jun 28, 2025
Understanding brain dynamics through functional Magnetic Resonance Imaging (fMRI) remains a fundamental challenge in neuroscience, particularly in capturing how the brain transitions between various functional states. Recently, metastability, which refers to temporarily stable brain states, has offered a promising paradigm for quantizing complex brain signals into interpretable, discretized representations. In particular, compared to cluster-based machine learning approaches, tokenization approaches leveraging vector quantization have shown promise for representation learning, with powerful reconstruction and predictive capabilities. However, most existing methods ignore brain transition dependencies and do not quantize brain dynamics into representative and stable embeddings. In this study, we propose a Hierarchical State space-based Tokenization network, termed HST, which quantizes brain states and transitions in a hierarchical structure built on a state-space model. We introduce a refined clustered Vector-Quantization Variational AutoEncoder (VQ-VAE) that incorporates quantization error feedback and clustering to improve quantization performance while facilitating metastability with representative and stable token representations. We validate HST on two public fMRI datasets, demonstrating its effectiveness in quantifying the hierarchical dynamics of the brain and its potential for disease diagnosis and reconstruction. Our method offers a promising framework for the characterization of brain dynamics, facilitating the analysis of metastability.
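
For readers unfamiliar with vector quantization, the sketch below shows a minimal VQ-VAE quantization layer in PyTorch: encoder states are snapped to their nearest codebook entries with a straight-through gradient and the standard codebook/commitment losses. This illustrates the general technique only, not the paper's refined clustered VQ-VAE with quantization-error feedback; the codebook size, latent dimension, and commitment weight are arbitrary assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizer(nn.Module):
    """Minimal VQ layer: maps encoder outputs to nearest codebook entries."""
    def __init__(self, num_codes: int = 512, dim: int = 64, beta: float = 0.25):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)
        self.codebook.weight.data.uniform_(-1.0 / num_codes, 1.0 / num_codes)
        self.beta = beta  # commitment loss weight

    def forward(self, z_e: torch.Tensor):
        # z_e: (batch, time, dim) latent states from the encoder
        flat = z_e.reshape(-1, z_e.size(-1))                  # (B*T, dim)
        dist = torch.cdist(flat, self.codebook.weight)        # distances to all codes
        idx = dist.argmin(dim=-1)                             # nearest code per state
        z_q = self.codebook(idx).view_as(z_e)                 # quantized latents
        # codebook + commitment losses (standard VQ-VAE objective)
        loss = F.mse_loss(z_q, z_e.detach()) + self.beta * F.mse_loss(z_e, z_q.detach())
        # straight-through estimator so gradients flow back to the encoder
        z_q = z_e + (z_q - z_e).detach()
        return z_q, idx.view(z_e.shape[:-1]), loss
```

The discrete indices returned here play the role of "tokens" for downstream modeling of state sequences.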

Inpainting is All You Need: A Diffusion-based Augmentation Method for Semi-supervised Medical Image Segmentation

Xinrong Hu, Yiyu Shi

arXiv preprint · Jun 28, 2025
Collecting pixel-level labels for medical datasets can be a laborious and expensive process, and enhancing segmentation performance with a scarcity of labeled data is a crucial challenge. This work introduces AugPaint, a data augmentation framework that uses inpainting to generate image-label pairs from limited labeled data. AugPaint leverages latent diffusion models, known for their ability to generate high-quality in-domain images with low overhead, and adapts the sampling process for the inpainting task without the need for retraining. Specifically, given an image and its label mask, we crop the area labeled as foreground and condition on it during the reverse denoising process at every noise level. The masked background area is gradually filled in, and each generated image is paired with the original label mask. This approach guarantees an accurate match between synthetic images and label masks, setting it apart from existing dataset generation methods. The generated images serve as valuable supervision for training downstream segmentation models, effectively addressing the challenge of limited annotations. We conducted extensive evaluations of our data augmentation method on four public medical image segmentation datasets covering CT, MRI, and skin imaging. Results across all datasets demonstrate that AugPaint outperforms state-of-the-art label-efficient methods, significantly improving segmentation performance.
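
As a rough illustration of the masked-conditioning idea (in the spirit of RePaint-style inpainting, not AugPaint's exact sampler), the sketch below re-imposes the known foreground region at every reverse denoising step. It assumes a diffusers-style scheduler with set_timesteps, step, and add_noise methods, and a hypothetical noise-prediction model call.

```python
import torch

@torch.no_grad()
def inpaint_background(x0, mask, model, scheduler, num_steps: int = 50):
    """
    Fill in the masked background while keeping the labeled foreground fixed.
    x0:   (B, C, H, W) original image or latent containing the foreground
    mask: (B, 1, H, W) binary mask, 1 = keep (foreground), 0 = synthesize (background)
    """
    scheduler.set_timesteps(num_steps)
    x_t = torch.randn_like(x0)                            # start from pure noise
    for t in scheduler.timesteps:
        # paste the known foreground, noised to the current level, before denoising
        x_known = scheduler.add_noise(x0, torch.randn_like(x0), t)
        x_t = mask * x_known + (1.0 - mask) * x_t
        # one reverse denoising step; background evolves conditioned on the foreground
        noise_pred = model(x_t, t)                        # hypothetical noise-prediction call
        x_t = scheduler.step(noise_pred, t, x_t).prev_sample
    # final paste of the exact foreground so the output matches the label mask
    return mask * x0 + (1.0 - mask) * x_t
```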

Deep Learning-Based Automated Detection of the Middle Cerebral Artery in Transcranial Doppler Ultrasound Examinations.

Lee H, Shi W, Mukaddim RA, Brunelle E, Palisetti A, Imaduddin SM, Rajendram P, Incontri D, Lioutas VA, Heldt T, Raju BI

PubMed · Jun 28, 2025
Transcranial Doppler (TCD) ultrasound has significant clinical value for assessing cerebral hemodynamics, but its reliance on operator expertise limits broader clinical adoption. In this work, we present a lightweight, real-time deep learning-based approach capable of automatically identifying the middle cerebral artery (MCA) in TCD Color Doppler images. Two state-of-the-art object detection models, YOLOv10 and Real-Time Detection Transformers (RT-DETR), were investigated for automated MCA detection in real time. TCD Color Doppler data (41 subjects; 365 videos; 61,611 frames) were collected from neurologically healthy individuals (n = 31) and stroke patients (n = 10). MCA bounding box annotations were performed by clinical experts on all frames. Model training consisted of pretraining on a large abdominal ultrasound dataset followed by fine-tuning on the acquired TCD data. Detection performance at the instance and frame levels, as well as inference speed, were assessed through four-fold cross-validation. Inter-rater agreement between the model and two human expert readers was assessed using the distance between bounding boxes, and inter-rater variability was quantified using the individual equivalence coefficient (IEC) metric. Both YOLOv10 and RT-DETR showed comparable frame-level accuracy for MCA presence, with F1 scores of 0.884 ± 0.023 and 0.884 ± 0.019, respectively. YOLOv10 outperformed RT-DETR in instance-level localization accuracy (AP: 0.817 vs. 0.780) and had considerably faster inference on a desktop CPU (11.6 ms vs. 91.14 ms). Furthermore, YOLOv10 showed an average inference time of 36 ms per frame on a tablet device. The IEC was -1.08 (95% confidence interval: [-1.45, -0.19]), showing that the AI predictions deviated less from each reader than the readers' annotations deviated from each other. Real-time automated detection of the MCA is feasible and can be implemented on mobile platforms, potentially enabling wider clinical adoption by less-trained operators in point-of-care settings.
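
To give a concrete sense of frame-by-frame deployment, here is a minimal sketch using the public Ultralytics API. The weights file name and confidence threshold are placeholders, not the authors' released model, and the exact training recipe above (abdominal-ultrasound pretraining plus TCD fine-tuning) is not reproduced here.

```python
from ultralytics import YOLO

# Hypothetical fine-tuned weights file; the paper's trained model is not assumed to be public.
model = YOLO("mca_yolov10n.pt")

def detect_mca(frame, conf_threshold: float = 0.5):
    """Return the highest-confidence MCA bounding box [x1, y1, x2, y2] for one frame, or None."""
    results = model.predict(frame, conf=conf_threshold, verbose=False)
    boxes = results[0].boxes
    if len(boxes) == 0:
        return None                      # frame-level "MCA absent"
    best = int(boxes.conf.argmax())      # keep the most confident detection
    return boxes.xyxy[best].tolist()
```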

Radio DINO: A foundation model for advanced radiomics and AI-driven medical imaging analysis.

Zedda L, Loddo A, Di Ruberto C

PubMed · Jun 28, 2025
Radiomics is transforming medical imaging by extracting complex features that enhance disease diagnosis, prognosis, and treatment evaluation. However, traditional approaches face significant challenges, such as the need for manual feature engineering, high dimensionality, and limited sample sizes. This paper presents Radio DINO, a novel family of deep learning foundation models that leverage self-supervised learning (SSL) techniques from DINO and DINOv2, pretrained on the RadImageNet dataset. The novelty of our approach lies in (1) developing Radio DINO to capture rich semantic embeddings, enabling robust feature extraction without manual intervention, (2) demonstrating superior performance across various clinical tasks on the MedMNISTv2 dataset, surpassing existing models, and (3) enhancing the interpretability of the model by providing visualizations that highlight its focus on clinically relevant image regions. Our results show that Radio DINO has the potential to democratize advanced radiomics tools, making them accessible to healthcare institutions with limited resources and ultimately improving diagnostic and prognostic outcomes in radiology.
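
Radio DINO's weights are not assumed to be publicly available here. As an illustration of the general recipe (frozen self-supervised embeddings used as "deep radiomics" features), the sketch below extracts a feature vector with the public DINOv2 ViT-S/14 backbone from torch.hub; the preprocessing values are the standard ImageNet ones and are an assumption.

```python
import torch
from torchvision import transforms
from PIL import Image

# Public DINOv2 backbone as a stand-in for a radiology-pretrained encoder.
backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_embedding(image_path: str) -> torch.Tensor:
    """Return one feature vector per image, usable as input to a downstream classifier."""
    img = Image.open(image_path).convert("RGB")
    x = preprocess(img).unsqueeze(0)       # (1, 3, 224, 224)
    return backbone(x).squeeze(0)          # (384,) CLS embedding for ViT-S/14
```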

Novel Artificial Intelligence-Driven Infant Meningitis Screening From High-Resolution Ultrasound Imaging.

Sial HA, Carandell F, Ajanovic S, Jiménez J, Quesada R, Santos F, Buck WC, Sidat M, Bassat Q, Jobst B, Petrone P

PubMed · Jun 28, 2025
Infant meningitis can be a life-threatening disease and requires prompt and accurate diagnosis to prevent severe outcomes or death. Gold-standard diagnosis requires lumbar puncture (LP) to obtain and analyze cerebrospinal fluid (CSF). Despite being standard practice, LPs are invasive, pose risks for the patient, and often yield negative results, either due to contamination with red blood cells from the puncture itself or because LPs are routinely performed to rule out a life-threatening infection despite the disease's relatively low incidence. Furthermore, in low-income settings where incidence is the highest, LPs and CSF exams are rarely feasible, and suspected meningitis cases are generally treated empirically. There is a growing need for non-invasive, accurate diagnostic methods. We developed a three-stage deep learning framework using Neosonics ultrasound technology for 30 infants with suspected meningitis and a permeable fontanelle, enrolled at three Spanish university hospitals (2021-2023). In stage 1, 2194 images were processed for quality control using a vessel/non-vessel model, with a focus on vessel identification and manual removal of images exhibiting artifacts such as poor coupling and clutter. This refinement resulted in a final cohort of 16 patients: 6 cases (336 images) and 10 controls (445 images), yielding 781 images for the second stage. The second stage used a deep learning model to classify images into control or meningitis categories based on a white blood cell count threshold (set at 30 cells/mm³). The third stage integrated explainable artificial intelligence (XAI) methods, such as Grad-CAM visualizations, alongside image statistical analysis, to provide transparency and interpretability of the model's decision-making process in our artificial intelligence-driven screening tool. Our approach achieved 96% accuracy in quality control and 93% precision and 92% accuracy in image-level meningitis detection, with an overall patient-level accuracy of 94%. It identified 6 meningitis cases and 10 controls with 100% sensitivity and 90% specificity, with only a single misclassification. The use of gradient-weighted class activation mapping-based XAI significantly enhanced diagnostic interpretability, and, to further refine our insights, we incorporated a statistics-based XAI approach. By analyzing image metrics such as entropy and standard deviation, we identified texture variations in the images attributable to the presence of cells, which improved the interpretability of our diagnostic tool. This study supports the efficacy of a multi-stage deep learning model for non-invasive screening of infant meningitis and its potential to guide the need for LPs. It also highlights the transformative potential of artificial intelligence in medical diagnostic screening for neonatal health care, paving the way for future research and innovations.
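
The statistics-based XAI step described above relies on simple texture measures. A minimal version of computing intensity entropy and standard deviation for a grayscale ultrasound ROI might look like the following; the bin count is an arbitrary assumption rather than the study's setting.

```python
import numpy as np

def texture_stats(image: np.ndarray, num_bins: int = 256):
    """Shannon entropy (bits) and standard deviation of grayscale intensities for one ROI."""
    img = image.astype(np.float64)
    std = img.std()
    hist, _ = np.histogram(img, bins=num_bins)
    p = hist / hist.sum()
    p = p[p > 0]                         # drop empty bins to avoid log(0)
    entropy = -np.sum(p * np.log2(p))
    return entropy, std
```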

Comparative analysis of iterative vs AI-based reconstruction algorithms in CT imaging for total body assessment: Objective and subjective clinical analysis.

Tucciariello RM, Botte M, Calice G, Cammarota A, Cammarota F, Capasso M, Nardo GD, Lancellotti MI, Palmese VP, Sarno A, Villonio A, Bianculli A

PubMed · Jun 28, 2025
This study evaluates the performance of iterative and AI-based reconstruction algorithms in CT imaging for brain, chest, and upper abdomen assessments. Using a 320-slice CT scanner, phantom images were analysed through quantitative metrics such as Noise, Contrast-to-Noise Ratio, and Target Transfer Function. Additionally, five radiologists performed subjective evaluations on real patient images by scoring clinical parameters related to anatomical structures across the three body sites. The study aimed to relate the results of the standard medical-physics approach, obtained on a Catphan physical phantom, to the scores the radiologists assigned to the clinical parameters chosen in this study, and to determine whether the physical approach alone is sufficient to support the implementation of new procedures and optimization in clinical practice. AI-based algorithms demonstrated superior performance in chest and abdominal imaging, enhancing parenchymal and vascular detail with notable reductions in noise. However, their performance in brain imaging was less effective, as aggressive noise reduction led to excessive smoothing that affected diagnostic interpretability. Iterative reconstruction methods provided balanced results for brain imaging, preserving structural details and maintaining diagnostic clarity. The findings emphasize the need for region-specific optimization of reconstruction protocols. While AI-based methods can complement traditional IR techniques, they should not be assumed to inherently improve outcomes. A critical and cautious introduction of AI-based techniques is essential, ensuring radiologists adapt effectively without compromising diagnostic accuracy.
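
For reference, the two simpler objective metrics mentioned (Noise and Contrast-to-Noise Ratio) can be computed from two phantom ROIs as in the sketch below; the Target Transfer Function requires an edge- or insert-based measurement and is not shown. The ROI definitions here are generic assumptions, not the study's exact protocol.

```python
import numpy as np

def noise_and_cnr(target_roi: np.ndarray, background_roi: np.ndarray):
    """
    Objective image-quality metrics from two phantom ROIs (pixel values in HU).
    Noise: standard deviation in the uniform background ROI.
    CNR:   |mean(target) - mean(background)| / noise.
    """
    noise = background_roi.std()
    cnr = abs(target_roi.mean() - background_roi.mean()) / noise
    return noise, cnr
```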

Emerging Artificial Intelligence Innovations in Rheumatoid Arthritis and Challenges to Clinical Adoption.

Gilvaz VJ, Sudheer A, Reginato AM

PubMed · Jun 28, 2025
This review was written to inform practicing clinical rheumatologists about recent advances in artificial intelligence (AI) based research in rheumatoid arthritis (RA), using accessible and practical language. We highlight developments from 2023 to early 2025 across diagnostic imaging, treatment prediction, drug discovery, and patient-facing tools. Given the increasing clinical interest in AI and its potential to augment care delivery, this article aims to bridge the gap between technical innovation and real-world rheumatology practice. Several AI models have demonstrated high accuracy in early RA detection using imaging modalities such as thermal imaging and nuclear scans. Predictive models for treatment response have leveraged routinely collected electronic health record (EHR) data, moving closer to practical application in clinical workflows. Patient-facing tools like mobile symptom checkers and large language models (LLMs) such as ChatGPT show promise in enhancing education and engagement, although accuracy and safety remain variable. AI has also shown utility in identifying novel biomarkers and accelerating drug discovery. Despite these advances, as of early 2025, no AI-based tools have received FDA approval for use in rheumatology, in contrast to other specialties. Artificial intelligence holds tremendous promise to enhance clinical care in RA-from early diagnosis to personalized therapy. However, clinical adoption remains limited due to regulatory, technical, and implementation challenges. A streamlined regulatory framework and closer collaboration between clinicians, researchers, and industry partners are urgently needed. With thoughtful integration, AI can serve as a valuable adjunct in addressing clinical complexity and workforce shortages in rheumatology.

Comprehensive review of pulmonary embolism imaging: past, present and future innovations in computed tomography (CT) and other diagnostic techniques.

Triggiani S, Pellegrino G, Mortellaro S, Bubba A, Lanza C, Carriero S, Biondetti P, Angileri SA, Fusco R, Granata V, Carrafiello G

PubMed · Jun 28, 2025
Pulmonary embolism (PE) remains a critical condition that demands rapid and accurate diagnosis, for which computed tomographic pulmonary angiography (CTPA) is widely recognized as the diagnostic gold standard. However, recent advancements in imaging technologies-such as dual-energy computed tomography (DECT), photon-counting CT (PCD-CT), and artificial intelligence (AI)-offer promising enhancements to traditional diagnostic methods. This study reviews past, current and emerging technologies, focusing on their potential to optimize diagnostic accuracy, reduce contrast volumes and radiation doses, and streamline clinical workflows. DECT, with its dual-energy imaging capabilities, enhances image clarity even with lower contrast media volumes, thus reducing patient risk. Meanwhile, PCD-CT has shown potential for dose reduction and superior image resolution, particularly in challenging cases. AI-based tools further augment diagnostic speed and precision by assisting radiologists in image analysis, consequently decreasing workloads and expediting clinical decision-making. Collectively, these innovations hold promise for improved clinical management of PE, enabling not only more accurate diagnoses but also safer, more efficient patient care. Further research is necessary to fully integrate these advancements into routine clinical practice, potentially redefining diagnostic workflows for PE and enhancing patient outcomes.

Automated Evaluation of Female Pelvic Organ Descent on Transperineal Ultrasound: Model Development and Validation.

Wu S, Wu J, Xu Y, Tan J, Wang R, Zhang X

PubMed · Jun 28, 2025
Transperineal ultrasound (TPUS) is a widely used tool for evaluating female pelvic organ prolapse (POP), but its accurate interpretation relies on experience, causing diagnostic variability. This study aims to develop and validate a multi-task deep learning model to automate POP assessment using TPUS images. TPUS images from 1340 female patients (January-June 2023) were evaluated by two experienced physicians. The presence and severity of cystocele, uterine prolapse, rectocele, and excessive mobility of the perineal body (EMoPB) were documented. After preprocessing, 1072 images were used for training and 268 for validation. The model used ResNet34 as the feature extractor and four parallel fully connected layers to predict the four conditions. Model performance was assessed using confusion matrices and the area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) visualized the model's focus areas. The model demonstrated strong diagnostic performance, with accuracies and AUC values as follows: cystocele, 0.869 (95% CI, 0.824-0.905) and 0.947 (95% CI, 0.930-0.962); uterine prolapse, 0.799 (95% CI, 0.746-0.842) and 0.931 (95% CI, 0.911-0.948); rectocele, 0.978 (95% CI, 0.952-0.990) and 0.892 (95% CI, 0.849-0.927); and EMoPB, 0.869 (95% CI, 0.824-0.905) and 0.942 (95% CI, 0.907-0.967). Grad-CAM heatmaps revealed that the model's focus areas were consistent with those observed by human experts. This study presents a multi-task deep learning model for automated POP assessment using TPUS images, showing promising efficacy and potential to benefit a broader population of women.
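
A minimal PyTorch sketch of the described architecture, a shared ResNet34 feature extractor feeding four parallel fully connected heads, is shown below. The number of output classes per head (binary presence here) is an assumption, since the paper also documents severity.

```python
import torch
import torch.nn as nn
from torchvision import models

class MultiTaskPOPNet(nn.Module):
    """ResNet34 backbone with four parallel heads, one per pelvic-floor condition."""
    def __init__(self, num_classes_per_task: int = 2):
        super().__init__()
        backbone = models.resnet34(weights=None)   # swap in pretrained weights as needed
        feat_dim = backbone.fc.in_features         # 512 for ResNet34
        backbone.fc = nn.Identity()                # keep only the feature extractor
        self.backbone = backbone
        self.heads = nn.ModuleDict({
            name: nn.Linear(feat_dim, num_classes_per_task)
            for name in ["cystocele", "uterine_prolapse", "rectocele", "emopb"]
        })

    def forward(self, x: torch.Tensor):
        feats = self.backbone(x)                   # (B, 512) shared features
        return {name: head(feats) for name, head in self.heads.items()}
```

Training would typically sum a cross-entropy loss over the four heads, which is what makes the setup "multi-task" rather than four separate models.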

Developing ultrasound-based machine learning models for accurate differentiation between sclerosing adenosis and invasive ductal carcinoma.

Liu G, Yang N, Qu Y, Chen G, Wen G, Li G, Deng L, Mai Y

PubMed · Jun 28, 2025
This study aimed to develop a machine learning model using breast ultrasound images to improve the non-invasive differential diagnosis between Sclerosing Adenosis (SA) and Invasive Ductal Carcinoma (IDC). A total of 2046 ultrasound images from 772 SA and IDC patients were collected, regions of interest (ROIs) were delineated, and features were extracted. The dataset was split into training and test cohorts, and feature selection was performed using correlation coefficients and Recursive Feature Elimination. Ten classifiers with Grid Search and 5-fold cross-validation were applied during model training. The Receiver Operating Characteristic (ROC) curve and Youden index were used for model evaluation, and SHapley Additive exPlanations (SHAP) was employed for model interpretation. Another 224 ROIs from 84 patients at other hospitals were used for external validation. For the ROI-level model, XGBoost with 18 features achieved an area under the curve (AUC) of 0.9758 (0.9654-0.9847) in the test cohort and 0.9906 (0.9805-0.9973) in the validation cohort. For the patient-level model, logistic regression with 9 features achieved an AUC of 0.9653 (0.9402-0.9859) in the test cohort and 0.9846 (0.9615-0.9978) in the validation cohort. The feature "Original shape Major Axis Length" was identified as the most important, with higher values associated with a greater likelihood of the sample being IDC. Feature contributions for specific ROIs were visualized as well. We developed explainable, ultrasound-based machine learning models with high performance for differentiating SA from IDC, offering a potential non-invasive tool for improved differential diagnosis.
Question: Accurately distinguishing between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC) in a non-invasive manner has been a diagnostic challenge.
Findings: Explainable, ultrasound-based machine learning models with high performance were developed for differentiating SA and IDC, and validated well in an external validation cohort.
Critical relevance: These models provide non-invasive tools to reduce misdiagnoses of SA and improve early detection of IDC.
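
To make the ROI-level pipeline concrete, here is a minimal scikit-learn/XGBoost sketch: feature selection by Recursive Feature Elimination followed by grid-searched XGBoost with 5-fold cross-validation scored by ROC AUC. The RFE base estimator and hyperparameter grid are illustrative assumptions, not the study's exact configuration.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from xgboost import XGBClassifier

def train_roi_classifier(X: np.ndarray, y: np.ndarray, n_features: int = 18):
    """Select n_features by RFE, then tune an XGBoost classifier with 5-fold CV."""
    # Recursive Feature Elimination with a linear estimator to rank features
    selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=n_features)
    X_sel = selector.fit_transform(X, y)

    # Grid search with 5-fold cross-validation, scored by ROC AUC
    grid = GridSearchCV(
        XGBClassifier(eval_metric="logloss"),
        param_grid={"max_depth": [3, 5], "n_estimators": [100, 300]},
        scoring="roc_auc",
        cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
    )
    grid.fit(X_sel, y)
    return selector, grid.best_estimator_
```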