
Cognition-Eye-Brain Connection in Alzheimer's Disease Spectrum Revealed by Multimodal Imaging.

Shi Y, Shen T, Yan S, Liang J, Wei T, Huang Y, Gao R, Zheng N, Ci R, Zhang M, Tang X, Qin Y, Zhu W

PubMed · Jun 29 2025
The connection between cognition, eye, and brain remains inconclusive in Alzheimer's disease (AD) spectrum disorders. This prospective study explored the relationship between cognitive function, retinal biometrics, and brain alterations across the AD spectrum in a healthy control (HC) group (n = 16) and subjective cognitive decline (SCD) (n = 35), mild cognitive impairment (MCI) (n = 18), and AD (n = 7) groups. Imaging was performed at 3 T with 3D T1-weighted Brain Volume (BRAVO) and resting-state functional MRI (fMRI). In all subgroups, cortical thickness was measured from BRAVO images segmented with the Desikan-Killiany-Tourville (DKT) atlas. The fractional amplitude of low-frequency fluctuations (FALFF) and regional homogeneity (ReHo) were measured from fMRI using voxel-based analysis. The eye was imaged by optical coherence tomography angiography (OCTA), with the deep learning model FARGO segmenting the foveal avascular zone (FAZ) and retinal vessels; FAZ area and perimeter, retinal blood vessel curvature (RBVC), and thicknesses of the retinal nerve fiber layer (RNFL) and ganglion cell layer-inner plexiform layer (GCL-IPL) were calculated. Cognition-eye-brain associations were compared between the HC group and each AD spectrum stage using multivariable linear regression, with statistical significance set at p < 0.05 (FWE-corrected) for fMRI and p < 1/62 (Bonferroni-corrected) for structural analyses. Reductions of FALFF in temporal regions, especially the left superior temporal gyrus (STG) in MCI patients, were significantly linked to decreased RNFL thickness and increased FAZ area. In AD patients, reduced ReHo values in occipital regions, especially the right middle occipital gyrus (MOG), were significantly associated with an enlarged FAZ area. The SCD group showed widespread cortical thickening significantly associated with all of the aforementioned retinal biometrics, with notable thickening in the right fusiform gyrus (FG) and right parahippocampal gyrus (PHG) correlating with reduced GCL-IPL thickness. Brain function and structure may be associated with cognition and retinal biometrics across the AD spectrum; in particular, cognition-eye-brain connections may already be present in SCD.
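The group-wise regressions described above follow a standard pattern. As a minimal, hypothetical sketch (not the authors' code), a single cognition-eye-brain association could be tested with statsmodels and compared against the 1/62 structural threshold quoted in the abstract; all column names, covariates, and data here are illustrative assumptions.

```python
# Hypothetical multivariable linear regression of a regional brain measure on
# retinal biometrics, with a Bonferroni-style threshold across 62 comparisons.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 76  # toy cohort size matching 16 HC + 35 SCD + 18 MCI + 7 AD subjects
df = pd.DataFrame({
    "cortical_thickness": rng.normal(2.5, 0.2, n),  # toy regional thickness (mm)
    "faz_area": rng.normal(0.30, 0.05, n),          # toy FAZ area (mm^2)
    "rnfl_thickness": rng.normal(100, 10, n),       # toy RNFL thickness (um)
    "age": rng.integers(55, 85, n),
    "sex": rng.integers(0, 2, n),
})

alpha = 1.0 / 62  # threshold quoted for the structural analyses

model = smf.ols(
    "cortical_thickness ~ faz_area + rnfl_thickness + age + sex", data=df
).fit()
p = model.pvalues["faz_area"]
print(f"FAZ-area beta = {model.params['faz_area']:.3f}, p = {p:.3g}, "
      f"significant at 1/62: {p < alpha}")
```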

Inpainting is All You Need: A Diffusion-based Augmentation Method for Semi-supervised Medical Image Segmentation

Xinrong Hu, Yiyu Shi

arXiv preprint · Jun 28 2025
Collecting pixel-level labels for medical datasets can be a laborious and expensive process, and enhancing segmentation performance with scarce labeled data is a crucial challenge. This work introduces AugPaint, a data augmentation framework that uses inpainting to generate image-label pairs from limited labeled data. AugPaint leverages latent diffusion models, known for their ability to generate high-quality in-domain images with low overhead, and adapts the sampling process to the inpainting task without the need for retraining. Specifically, given a pair of image and label mask, we crop the area labeled as foreground and condition on it during the reverse denoising process at every noise level. The masked background area is gradually filled in, and every generated image is paired with the original label mask. This approach guarantees an accurate match between synthetic images and label masks, setting it apart from existing dataset generation methods. The generated images serve as valuable supervision for training downstream segmentation models, effectively addressing the challenge of limited annotations. We conducted extensive evaluations of our data augmentation method on four public medical image segmentation datasets covering CT, MRI, and skin imaging. Results across all datasets demonstrate that AugPaint outperforms state-of-the-art label-efficient methods, significantly improving segmentation performance.
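The mask-guided sampling idea can be illustrated with a RePaint-style reverse loop: at each step, the labeled foreground is replaced by a forward-noised copy of the original image while the model synthesizes the background. The sketch below is a hypothetical simplification, not the AugPaint implementation; the denoiser is a stand-in for a trained (latent) diffusion model, and the noise schedule and toy data are assumptions.

```python
# Minimal DDPM-style inpainting sketch: keep the known foreground, fill in the
# masked background via reverse denoising. Replace `denoiser` with a real model.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

def denoiser(x_t, t):
    # Placeholder epsilon-predictor; a pretrained diffusion model goes here.
    return torch.zeros_like(x_t)

@torch.no_grad()
def inpaint(x0, mask):
    """x0: clean image tensor; mask: 1 where pixels are known (foreground)."""
    x = torch.randn_like(x0)
    for t in reversed(range(T)):
        eps = denoiser(x, t)
        coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x_unknown = mean + torch.sqrt(betas[t]) * noise
        # Forward-diffuse the known foreground to the same noise level.
        ab_prev = alpha_bar[t - 1] if t > 0 else torch.tensor(1.0)
        x_known = torch.sqrt(ab_prev) * x0 + torch.sqrt(1.0 - ab_prev) * torch.randn_like(x0)
        # Condition on the labeled foreground; let the model fill the background.
        x = mask * x_known + (1.0 - mask) * x_unknown
    return x

image = torch.rand(1, 1, 64, 64)                      # toy image
fg_mask = (torch.rand(1, 1, 64, 64) > 0.7).float()    # toy foreground label mask
augmented = inpaint(image, fg_mask)                   # pairs with the original mask
```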

Deep Learning-Based Automated Detection of the Middle Cerebral Artery in Transcranial Doppler Ultrasound Examinations.

Lee H, Shi W, Mukaddim RA, Brunelle E, Palisetti A, Imaduddin SM, Rajendram P, Incontri D, Lioutas VA, Heldt T, Raju BI

PubMed · Jun 28 2025
Transcranial Doppler (TCD) ultrasound has significant clinical value for assessing cerebral hemodynamics, but its reliance on operator expertise limits broader clinical adoption. In this work, we present a lightweight, real-time, deep learning-based approach capable of automatically identifying the middle cerebral artery (MCA) in TCD Color Doppler images. Two state-of-the-art object detection models, YOLOv10 and Real-Time Detection Transformers (RT-DETR), were investigated for automated MCA detection in real time. TCD Color Doppler data (41 subjects; 365 videos; 61,611 frames) were collected from neurologically healthy individuals (n = 31) and stroke patients (n = 10). MCA bounding box annotations were performed by clinical experts on all frames. Model training consisted of pretraining on a large abdominal ultrasound dataset followed by fine-tuning on the acquired TCD data. Detection performance at the instance and frame levels, as well as inference speed, was assessed through four-fold cross-validation. Inter-rater agreement between the model and two human expert readers was assessed using the distance between bounding boxes, and inter-rater variability was quantified using the individual equivalence coefficient (IEC) metric. Both YOLOv10 and RT-DETR models showed comparable frame-level accuracy for MCA presence, with F1 scores of 0.884 ± 0.023 and 0.884 ± 0.019, respectively. YOLOv10 outperformed RT-DETR for instance-level localization accuracy (AP: 0.817 vs. 0.780) and had considerably faster inference speed on a desktop CPU (11.6 ms vs. 91.14 ms). Furthermore, YOLOv10 showed an average inference time of 36 ms per frame on a tablet device. The IEC was -1.08 (95% confidence interval: [-1.45, -0.19]), showing that the AI predictions deviated less from each reader than the readers' annotations deviated from each other. Real-time automated detection of the MCA is feasible and can be implemented on mobile platforms, potentially enabling wider clinical adoption by less-trained operators in point-of-care settings.
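As a rough illustration of this kind of detection setup (not the authors' pipeline), a YOLO-family detector can be fine-tuned on bounding-box annotations and run frame by frame with the ultralytics package. The checkpoint name, dataset YAML, video file, and confidence threshold below are assumptions.

```python
# Hypothetical sketch: fine-tune a YOLO detector on TCD frames and run
# streaming inference, printing one candidate MCA bounding box per detection.
from ultralytics import YOLO

model = YOLO("yolov10n.pt")                               # pretrained checkpoint (assumed)
model.train(data="tcd_mca.yaml", epochs=100, imgsz=640)   # hypothetical dataset config

results = model.predict(source="tcd_clip.mp4", stream=True, conf=0.25)
for r in results:
    for box in r.boxes:
        print(box.xyxy.tolist(), float(box.conf))         # [x1, y1, x2, y2], confidence
```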

Radio DINO: A foundation model for advanced radiomics and AI-driven medical imaging analysis.

Zedda L, Loddo A, Di Ruberto C

PubMed · Jun 28 2025
Radiomics is transforming medical imaging by extracting complex features that enhance disease diagnosis, prognosis, and treatment evaluation. However, traditional approaches face significant challenges, such as the need for manual feature engineering, high dimensionality, and limited sample sizes. This paper presents Radio DINO, a novel family of deep learning foundation models that leverage self-supervised learning (SSL) techniques from DINO and DINOV2, pretrained on the RadImageNet dataset. The novelty of our approach lies in (1) developing Radio DINO to capture rich semantic embeddings, enabling robust feature extraction without manual intervention, (2) demonstrating superior performance across various clinical tasks on the MedMNISTv2 dataset, surpassing existing models, and (3) enhancing the interpretability of the model by providing visualizations that highlight its focus on clinically relevant image regions. Our results show that Radio DINO has the potential to democratize advanced radiomics tools, making them accessible to healthcare institutions with limited resources and ultimately improving diagnostic and prognostic outcomes in radiology.
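To make the feature-extraction idea concrete: a self-supervised ViT backbone can be used off the shelf as a frozen feature extractor, with its embeddings feeding downstream classifiers. The sketch below uses the publicly released DINOv2 ViT-S/14 weights as a stand-in, since Radio DINO's own weights are not assumed here; the random input tensor represents a preprocessed, normalised scan.

```python
# Illustrative embedding extraction with a public DINOv2 backbone (not Radio DINO).
import torch

backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
backbone.eval()

x = torch.rand(1, 3, 224, 224)      # stand-in for a preprocessed, normalised image
with torch.no_grad():
    embedding = backbone(x)          # (1, 384) CLS-token feature for ViT-S/14
print(embedding.shape)               # these embeddings replace hand-crafted radiomics features
```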

Novel Artificial Intelligence-Driven Infant Meningitis Screening From High-Resolution Ultrasound Imaging.

Sial HA, Carandell F, Ajanovic S, Jiménez J, Quesada R, Santos F, Buck WC, Sidat M, Bassat Q, Jobst B, Petrone P

PubMed · Jun 28 2025
Infant meningitis can be a life-threatening disease and requires prompt and accurate diagnosis to prevent severe outcomes or death. Gold-standard diagnosis requires lumbar puncture (LP) to obtain and analyze cerebrospinal fluid (CSF). Despite being standard practice, LPs are invasive, pose risks for the patient, and often yield negative results, either because of contamination with red blood cells from the puncture itself or because LPs are routinely performed to rule out a life-threatening infection despite the disease's relatively low incidence. Furthermore, in low-income settings where incidence is highest, LPs and CSF exams are rarely feasible, and suspected meningitis cases are generally treated empirically. There is a growing need for non-invasive, accurate diagnostic methods. We developed a three-stage deep learning framework using Neosonics ultrasound technology, applied to 30 infants with suspected meningitis and a permeable fontanelle at three Spanish university hospitals (2021 to 2023). In stage 1, 2194 images underwent quality control with a vessel/non-vessel model, focusing on vessel identification and manual removal of images exhibiting artifacts such as poor coupling and clutter. This refinement yielded a final cohort of 16 patients: 6 cases (336 images) and 10 controls (445 images), for a total of 781 images passed to the second stage. The second stage used a deep learning model to classify images into control or meningitis categories based on a white blood cell count threshold (30 cells/mm³). The third stage integrated explainable artificial intelligence (XAI) methods, such as Grad-CAM visualizations, alongside image statistical analysis, to provide transparency and interpretability of the model's decision-making in our artificial intelligence-driven screening tool. Our approach achieved 96% accuracy in quality control, and 93% precision and 92% accuracy in image-level meningitis detection, with an overall patient-level accuracy of 94%. It identified 6 meningitis cases and 10 controls with 100% sensitivity and 90% specificity, with only a single misclassification. The use of gradient-weighted class activation mapping-based XAI significantly enhanced diagnostic interpretability, and to refine these insights further we incorporated a statistics-based XAI approach. By analyzing image metrics such as entropy and standard deviation, we identified texture variations attributable to the presence of cells, which improved the interpretability of our diagnostic tool. This study supports the efficacy of a multi-stage deep learning model for non-invasive screening of infant meningitis and its potential to guide the need for LPs. It also highlights the transformative potential of artificial intelligence in medical diagnostic screening for neonatal health care, paving the way for future research and innovations.
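The Grad-CAM step in the third stage can be sketched generically. The example below is not the authors' model: it assumes a ResNet-style binary classifier, the pytorch-grad-cam package, and an arbitrary choice of target layer, with a random tensor standing in for a preprocessed ultrasound frame.

```python
# Illustrative Grad-CAM heatmap for a binary meningitis/control classifier.
import torch
from torchvision.models import resnet18
from pytorch_grad_cam import GradCAM
from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget

model = resnet18(num_classes=2)     # stand-in classifier (class 1 = "meningitis")
model.eval()

cam = GradCAM(model=model, target_layers=[model.layer4[-1]])
x = torch.rand(1, 3, 224, 224)      # hypothetical preprocessed ultrasound frame
heatmap = cam(input_tensor=x, targets=[ClassifierOutputTarget(1)])
print(heatmap.shape)                 # (1, 224, 224) saliency map over the frame
```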

Comparative analysis of iterative vs AI-based reconstruction algorithms in CT imaging for total body assessment: Objective and subjective clinical analysis.

Tucciariello RM, Botte M, Calice G, Cammarota A, Cammarota F, Capasso M, Nardo GD, Lancellotti MI, Palmese VP, Sarno A, Villonio A, Bianculli A

PubMed · Jun 28 2025
This study evaluates the performance of Iterative and AI-based Reconstruction algorithms in CT imaging for brain, chest, and upper abdomen assessments. Using a 320-slice CT scanner, phantom images were analysed through quantitative metrics such as Noise, Contrast-to-Noise-Ratio and Target Transfer Function. Additionally, five radiologists performed subjective evaluations on real patient images by scoring clinical parameters related to anatomical structures across the three body sites. The study aimed to relate results obtained with the typical approach related to parameters involved in medical physics using a Catphan physical phantom, with the evaluations assigned by the radiologists to the clinical parameters chosen in this study, and to determine whether the physical approach alone can ensure the implementation of new procedures and the optimization in clinical practice. AI-based algorithms demonstrated superior performance in chest and abdominal imaging, enhancing parenchymal and vascular detail with notable reductions in noise. However, their performance in brain imaging was less effective, as the aggressive noise reduction led to excessive smoothing, which affected diagnostic interpretability. Iterative reconstruction methods provided balanced results for brain imaging, preserving structural details and maintaining diagnostic clarity. The findings emphasize the need for region-specific optimization of reconstruction protocols. While AI-based methods can complement traditional IR techniques, they should not be assumed to inherently improve outcomes. A critical and cautious introduction of AI-based techniques is essential, ensuring radiologists adapt effectively without compromising diagnostic accuracy.

Emerging Artificial Intelligence Innovations in Rheumatoid Arthritis and Challenges to Clinical Adoption.

Gilvaz VJ, Sudheer A, Reginato AM

PubMed · Jun 28 2025
This review was written to inform practicing clinical rheumatologists about recent advances in artificial intelligence (AI)-based research in rheumatoid arthritis (RA), using accessible and practical language. We highlight developments from 2023 to early 2025 across diagnostic imaging, treatment prediction, drug discovery, and patient-facing tools. Given the increasing clinical interest in AI and its potential to augment care delivery, this article aims to bridge the gap between technical innovation and real-world rheumatology practice. Several AI models have demonstrated high accuracy in early RA detection using imaging modalities such as thermal imaging and nuclear scans. Predictive models for treatment response have leveraged routinely collected electronic health record (EHR) data, moving closer to practical application in clinical workflows. Patient-facing tools such as mobile symptom checkers and large language models (LLMs) like ChatGPT show promise in enhancing education and engagement, although accuracy and safety remain variable. AI has also shown utility in identifying novel biomarkers and accelerating drug discovery. Despite these advances, as of early 2025, no AI-based tools have received FDA approval for use in rheumatology, in contrast to other specialties. Artificial intelligence holds tremendous promise to enhance clinical care in RA, from early diagnosis to personalized therapy. However, clinical adoption remains limited due to regulatory, technical, and implementation challenges. A streamlined regulatory framework and closer collaboration between clinicians, researchers, and industry partners are urgently needed. With thoughtful integration, AI can serve as a valuable adjunct in addressing clinical complexity and workforce shortages in rheumatology.

Automated Evaluation of Female Pelvic Organ Descent on Transperineal Ultrasound: Model Development and Validation.

Wu S, Wu J, Xu Y, Tan J, Wang R, Zhang X

PubMed · Jun 28 2025
Transperineal ultrasound (TPUS) is a widely used tool for evaluating female pelvic organ prolapse (POP), but its accurate interpretation relies on experience, causing diagnostic variability. This study aims to develop and validate a multi-task deep learning model to automate POP assessment using TPUS images. TPUS images from 1340 female patients (January-June 2023) were evaluated by two experienced physicians. The presence and severity of cystocele, uterine prolapse, rectocele, and excessive mobility of perineal body (EMoPB) were documented. After preprocessing, 1072 images were used for training and 268 for validation. The model used ResNet34 as the feature extractor and four parallel fully connected layers to predict the conditions. Model performance was assessed using confusion matrix and area under the curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) visualized the model's focus areas. The model demonstrated strong diagnostic performance, with accuracies and AUC values as follows: cystocele, 0.869 (95% CI, 0.824-0.905) and 0.947 (95% CI, 0.930-0.962); uterine prolapse, 0.799 (95% CI, 0.746-0.842) and 0.931 (95% CI, 0.911-0.948); rectocele, 0.978 (95% CI, 0.952-0.990) and 0.892 (95% CI, 0.849-0.927); and EMoPB, 0.869 (95% CI, 0.824-0.905) and 0.942 (95% CI, 0.907-0.967). Grad-CAM heatmaps revealed that the model's focus areas were consistent with those observed by human experts. This study presents a multi-task deep learning model for automated POP assessment using TPUS images, showing promising efficacy and potential to benefit a broader population of women.
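The architecture described, a shared ResNet34 feature extractor feeding four parallel fully connected heads, can be sketched as below. This is a hypothetical simplification, not the paper's configuration: the number of output classes per head, input size, and absence of pretrained weights are all assumptions.

```python
# Hypothetical multi-task network: shared ResNet34 backbone, four parallel heads
# (cystocele, uterine prolapse, rectocele, EMoPB).
import torch
import torch.nn as nn
from torchvision.models import resnet34

class MultiTaskPOPNet(nn.Module):
    def __init__(self, num_classes_per_task=(2, 2, 2, 2)):
        super().__init__()
        backbone = resnet34(weights=None)
        in_features = backbone.fc.in_features   # 512 for ResNet34
        backbone.fc = nn.Identity()              # keep only the feature extractor
        self.backbone = backbone
        self.heads = nn.ModuleList(
            [nn.Linear(in_features, n) for n in num_classes_per_task]
        )

    def forward(self, x):
        features = self.backbone(x)
        return [head(features) for head in self.heads]   # one logit tensor per task

model = MultiTaskPOPNet()
logits = model(torch.rand(1, 3, 224, 224))   # toy TPUS image
print([t.shape for t in logits])
```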

Developing ultrasound-based machine learning models for accurate differentiation between sclerosing adenosis and invasive ductal carcinoma.

Liu G, Yang N, Qu Y, Chen G, Wen G, Li G, Deng L, Mai Y

PubMed · Jun 28 2025
This study aimed to develop a machine learning model using breast ultrasound images to improve the non-invasive differential diagnosis between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC). A total of 2046 ultrasound images from 772 SA and IDC patients were collected, regions of interest (ROI) were delineated, and features were extracted. The dataset was split into training and test cohorts, and feature selection was performed using correlation coefficients and recursive feature elimination. Ten classifiers were trained with grid search and 5-fold cross-validation. The receiver operating characteristic (ROC) curve and Youden index were used for model evaluation, and SHapley Additive exPlanations (SHAP) was employed for model interpretation. Another 224 ROIs from 84 patients at other hospitals were used for external validation. For the ROI-level model, XGBoost with 18 features achieved an area under the curve (AUC) of 0.9758 (0.9654-0.9847) in the test cohort and 0.9906 (0.9805-0.9973) in the validation cohort. For the patient-level model, logistic regression with 9 features achieved an AUC of 0.9653 (0.9402-0.9859) in the test cohort and 0.9846 (0.9615-0.9978) in the validation cohort. The feature "Original shape Major Axis Length" was identified as the most important, with higher values corresponding to a higher likelihood of the sample being IDC; feature contributions for specific ROIs were visualized as well. We developed explainable, ultrasound-based machine learning models with high performance for differentiating SA and IDC, offering a potential non-invasive tool for improved differential diagnosis. Question: Accurately distinguishing between sclerosing adenosis (SA) and invasive ductal carcinoma (IDC) in a non-invasive manner has been a diagnostic challenge. Findings: Explainable, ultrasound-based machine learning models with high performance were developed for differentiating SA and IDC, and validated well in an external validation cohort. Critical relevance: These models provide non-invasive tools to reduce misdiagnoses of SA and improve early detection of IDC.
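A compact sketch of this modeling pipeline, recursive feature elimination, an XGBoost classifier, AUC evaluation, and SHAP explanations, is given below. It is illustrative only: synthetic data stands in for the extracted radiomics features, and the estimator choices and hyperparameters are assumptions rather than the study's grid-search results.

```python
# Hypothetical RFE -> XGBoost -> AUC -> SHAP pipeline on synthetic radiomics features.
import numpy as np
import shap
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))       # 400 ROIs x 50 extracted features (toy)
y = rng.integers(0, 2, size=400)     # 0 = SA, 1 = IDC (toy labels)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = RFE(LogisticRegression(max_iter=1000), n_features_to_select=18)
selector.fit(X_tr, y_tr)

clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(selector.transform(X_tr), y_tr)

auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
shap_values = shap.TreeExplainer(clf).shap_values(selector.transform(X_te))
print(f"test AUC = {auc:.3f}; SHAP matrix shape = {np.asarray(shap_values).shape}")
```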

Identifying visible tissue in intraoperative ultrasound: a method and application.

Weld A, Dixon L, Dyck M, Anichini G, Ranne A, Camp S, Giannarou S

PubMed · Jun 28 2025
Intraoperative ultrasound scanning is a demanding visuotactile task. It requires operators to simultaneously localise the ultrasound perspective and manually perform slight adjustments to the pose of the probe, taking care not to apply excessive force or break contact with the tissue, while also characterising the visible tissue. To analyse probe-tissue contact, an iterative filtering and topological method is proposed to identify the underlying visible tissue, which can be used to detect acoustic shadow and construct confidence maps of perceptual salience. For evaluation, datasets containing both in vivo and medical phantom data were created. A suite of evaluations is performed, including an evaluation of acoustic shadow classification. Compared to ablation, deep learning, and statistical methods, the proposed approach achieves superior classification on in vivo data, with an F_β score of 0.864, in comparison with 0.838, 0.808, and 0.808. A novel framework for evaluating the confidence estimation of probe-tissue contact is created; the phantom data were captured specifically for this purpose, and comparison is made against two established methods. The proposed method produced the superior response, achieving an average normalised root-mean-square error of 0.168, in comparison with 1.836 and 4.542. Evaluation is also extended to determine the algorithm's robustness to parameter perturbation, speckle noise, and data distribution shift, and its capability for guiding a robotic scan. The results of this comprehensive set of experiments justify the potential clinical value of the proposed algorithm, which can be used to support clinical training and robotic ultrasound automation.
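For readers unfamiliar with the two metrics reported above, they can be computed as follows. The values, the choice of beta, and the range normalisation are assumptions for illustration, since the abstract does not specify them.

```python
# Toy computation of an F_beta classification score and a normalised RMSE.
import numpy as np
from sklearn.metrics import fbeta_score

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])   # 1 = acoustic shadow (toy labels)
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
print("F_beta:", fbeta_score(y_true, y_pred, beta=1.0))

conf_true = np.linspace(1.0, 0.0, 100)         # toy reference confidence profile
conf_pred = conf_true + np.random.default_rng(0).normal(0, 0.05, 100)
nrmse = np.sqrt(np.mean((conf_pred - conf_true) ** 2)) / (conf_true.max() - conf_true.min())
print("NRMSE:", nrmse)
```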
