
Advancing Fetal Ultrasound Image Quality Assessment in Low-Resource Settings

Dongli He, Hu Wang, Mohammad Yaqub

arXiv preprint · Jul 30 2025
Accurate fetal biometric measurements, such as abdominal circumference, play a vital role in prenatal care. However, obtaining high-quality ultrasound images for these measurements heavily depends on the expertise of sonographers, posing a significant challenge in low-income countries due to the scarcity of trained personnel. To address this issue, we leverage FetalCLIP, a vision-language model pretrained on a curated dataset of over 210,000 fetal ultrasound image-caption pairs, to perform automated fetal ultrasound image quality assessment (IQA) on blind-sweep ultrasound data. We introduce FetalCLIP$_{CLS}$, an IQA model adapted from FetalCLIP using Low-Rank Adaptation (LoRA), and evaluate it on the ACOUSLIC-AI dataset against six CNN and Transformer baselines. FetalCLIP$_{CLS}$ achieves the highest F1 score of 0.757. Moreover, we show that an adapted segmentation model, when repurposed for classification, further improves performance, achieving an F1 score of 0.771. Our work demonstrates how parameter-efficient fine-tuning of fetal ultrasound foundation models can enable task-specific adaptations, advancing prenatal care in resource-limited settings. The experimental code is available at: https://github.com/donglihe-hub/FetalCLIP-IQA.
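The abstract does not spell out where the adapters sit inside FetalCLIP, so the sketch below only illustrates the general LoRA pattern it describes: freeze the pretrained image encoder, learn low-rank updates on a projection, and add a small quality-classification head. The encoder interface and adapter placement are assumptions, not the released FetalCLIP-IQA code linked above.

```python
# Minimal sketch of LoRA-style adaptation for ultrasound image quality classification.
# The encoder is a stand-in for a pretrained FetalCLIP-like image encoder (hypothetical).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                     # keep pretrained weights frozen
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

class IQAClassifier(nn.Module):
    """Frozen encoder -> LoRA-adapted projection -> binary quality head."""
    def __init__(self, encoder: nn.Module, embed_dim: int, num_classes: int = 2):
        super().__init__()
        self.encoder = encoder                          # assumed to return (batch, embed_dim)
        self.adapted_proj = LoRALinear(nn.Linear(embed_dim, embed_dim))
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, images):
        feats = self.encoder(images)
        return self.head(self.adapted_proj(feats))
```

Only the low-rank matrices and the classification head are updated during fine-tuning, which keeps the trainable parameter count small relative to the full foundation model.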

Whole-brain Transferable Representations from Large-Scale fMRI Data Improve Task-Evoked Brain Activity Decoding

Yueh-Po Peng, Vincent K. M. Cheung, Li Su

arXiv preprint · Jul 30 2025
A fundamental challenge in neuroscience is to decode mental states from brain activity. While functional magnetic resonance imaging (fMRI) offers a non-invasive approach to capture brain-wide neural dynamics with high spatial precision, decoding from fMRI data -- particularly from task-evoked activity -- remains challenging due to its high dimensionality, low signal-to-noise ratio, and limited within-subject data. Here, we leverage recent advances in computer vision and propose STDA-SwiFT, a transformer-based model that learns transferable representations from large-scale fMRI datasets via spatial-temporal divided attention and self-supervised contrastive learning. Using pretrained voxel-wise representations from 995 subjects in the Human Connectome Project (HCP), we show that our model substantially improves downstream decoding performance of task-evoked activity across multiple sensory and cognitive domains, even with minimal data preprocessing. We demonstrate performance gains from larger receptive fields afforded by our memory-efficient attention mechanism, as well as the impact of functional relevance in pretraining data when fine-tuning on small samples. Our work showcases transfer learning as a viable approach to harness large-scale datasets to overcome challenges in decoding brain activity from fMRI data.
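The abstract names spatial-temporal divided attention as the core mechanism; a common way to realize this pattern is to attend over spatial tokens within each time frame and then over time steps for each spatial location. The sketch below illustrates that general idea under assumed tensor shapes; it is not the authors' exact STDA-SwiFT module.

```python
# Hedged sketch of divided spatial-temporal attention over tokenized fMRI volumes:
# attend across space within each frame, then across time for each spatial location.
import torch
import torch.nn as nn

class DividedSpaceTimeAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):
        # x: (batch, time, space, dim)
        b, t, s, d = x.shape
        xs = x.reshape(b * t, s, d)                      # spatial attention per frame
        xs, _ = self.spatial_attn(xs, xs, xs)
        x = xs.reshape(b, t, s, d)

        xt = x.permute(0, 2, 1, 3).reshape(b * s, t, d)  # temporal attention per location
        xt, _ = self.temporal_attn(xt, xt, xt)
        return xt.reshape(b, s, t, d).permute(0, 2, 1, 3)

# Example: 2 scans, 16 time points, 64 spatial tokens, 128-dim embeddings.
tokens = torch.randn(2, 16, 64, 128)
out = DividedSpaceTimeAttention(128)(tokens)             # same shape as the input
```

Splitting attention this way keeps memory cost roughly linear in space plus time rather than their product, which is the efficiency argument the abstract alludes to.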

Structural MRI-based Computer-aided Diagnosis Models for Alzheimer Disease: Insights into Misclassifications and Diagnostic Limitations.

Kang X, Lin J, Zhao K, Yan S, Chen P, Wang D, Yao H, Zhou B, Yu C, Wang P, Liao Z, Chen Y, Zhang X, Han Y, Lu J, Liu Y

PubMed paper · Jul 30 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To examine common patterns among different computer-aided diagnosis (CAD) models for Alzheimer's disease (AD) using structural MRI data and to characterize the clinical and imaging features associated with their misclassifications. Materials and Methods This retrospective study utilized 3258 baseline structural MRIs from five multisite datasets and two multidisease datasets collected between September 2005 and December 2019. The 3D Nested Hierarchical Transformer (3DNesT) model and other CAD techniques were utilized for AD classification using 10-fold cross-validation and cross-dataset validation. Subgroup analysis of CAD-misclassified individuals compared clinical/neuroimaging biomarkers using independent <i>t</i> tests with Bonferroni correction. Results This study included 1391 patients with AD (mean age, 72.1 ± 9.2 years, 757 female), 205 with other neurodegenerative diseases (mean age, 64.9 ± 9.9 years, 117 male), and 1662 healthy controls (mean age, 70.6 ± 7.6 years, 935 female). The 3DNesT model achieved 90.1 ± 2.3% crossvalidation accuracy and 82.2%, 90.1%, and 91.6% in three external datasets. Further analysis suggested that false negative (FN) subgroup (<i>n</i> = 223) exhibited minimal atrophy and better cognitive performance than true positive (TP) subgroup (MMSE, FN, 21.4 ± 4.4; TP, 19.7 ± 5.7; <i>P<sub>FWE</sub></i> < 0.001), despite displaying similar levels of amyloid beta (FN, 705.9 ± 353.9; TP, 665.7 ± 305.8; <i>P<sub>FWE</sub></i> = 0.47), Tau (FN, 352.4 ± 166.8; TP, 371.0 ± 141.8; <i>P<sub>FWE</sub></i> = 0.47) burden. Conclusion FN subgroup exhibited atypical structural MRI patterns and clinical measures, fundamentally limiting the diagnostic performance of CAD models based solely on structural MRI. ©RSNA, 2025.

Learning from Heterogeneous Structural MRI via Collaborative Domain Adaptation for Late-Life Depression Assessment

Yuzhen Gao, Qianqian Wang, Yongheng Sun, Cui Wang, Yongquan Liang, Mingxia Liu

arXiv preprint · Jul 30 2025
Accurate identification of late-life depression (LLD) using structural brain MRI is essential for monitoring disease progression and facilitating timely intervention. However, existing learning-based approaches for LLD detection are often constrained by limited sample sizes (e.g., tens of scans), which poses significant challenges for reliable model training and generalization. Although incorporating auxiliary datasets can expand the training set, substantial domain heterogeneity, such as differences in imaging protocols, scanner hardware, and population demographics, often undermines cross-domain transferability. To address this issue, we propose a Collaborative Domain Adaptation (CDA) framework for LLD detection using T1-weighted MRIs. The CDA leverages a Vision Transformer (ViT) to capture global anatomical context and a Convolutional Neural Network (CNN) to extract local structural features, with each branch comprising an encoder and a classifier. The CDA framework consists of three stages: (a) supervised training on labeled source data, (b) self-supervised target feature adaptation, and (c) collaborative training on unlabeled target data. We first train the ViT and CNN on source data, then perform self-supervised target feature adaptation by minimizing the discrepancy between the two branches' classifier outputs to sharpen the category boundary. The collaborative training stage employs pseudo-labeled and augmented target-domain MRIs, enforcing prediction consistency under strong and weak augmentation to enhance domain robustness and generalization. Extensive experiments conducted on multi-site T1-weighted MRI data demonstrate that CDA consistently outperforms state-of-the-art unsupervised domain adaptation methods.
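Stages (b) and (c) rest on two unlabeled-target objectives: a discrepancy term between the ViT and CNN classifier outputs, and a consistency term that pushes predictions on strongly augmented scans toward confident pseudo-labels from weakly augmented ones. A hedged sketch of both terms, with model and augmentation interfaces assumed for illustration:

```python
# Hedged sketch of the two target-domain objectives described above. The model and
# augmentation interfaces are illustrative assumptions, not the authors' code.
import torch
import torch.nn.functional as F

def discrepancy_loss(logits_vit, logits_cnn):
    """Stage (b): L1 distance between the two branches' softmax outputs."""
    return (F.softmax(logits_vit, dim=1) - F.softmax(logits_cnn, dim=1)).abs().mean()

def consistency_loss(model, weak_images, strong_images, threshold=0.95):
    """Stage (c): pseudo-label consistency between weak and strong augmentations."""
    with torch.no_grad():
        probs = F.softmax(model(weak_images), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = conf.ge(threshold).float()      # keep only confident pseudo-labels
    strong_logits = model(strong_images)
    per_sample = F.cross_entropy(strong_logits, pseudo, reduction="none")
    return (per_sample * mask).sum() / mask.sum().clamp(min=1.0)
```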

Role of Artificial Intelligence in Surgical Training by Assessing GPT-4 and GPT-4o on the Japan Surgical Board Examination With Text-Only and Image-Accompanied Questions: Performance Evaluation Study.

Maruyama H, Toyama Y, Takanami K, Takase K, Kamei T

PubMed paper · Jul 30 2025
Artificial intelligence and large language models (LLMs), particularly GPT-4 and GPT-4o, have demonstrated high correct-answer rates on medical examinations. GPT-4o offers enhanced diagnostic capabilities, advanced image processing, and updated knowledge. Japanese surgeons face critical challenges, including a declining workforce, regional health care disparities, and work-hour constraints. Nonetheless, although LLMs could be beneficial in surgical education, no studies have yet assessed GPT-4o's surgical knowledge or its performance in the field of surgery. This study aims to evaluate the potential of GPT-4 and GPT-4o in surgical education by having them take the Japan Surgical Board Examination (JSBE), which includes both textual questions and medical images, such as surgical images and computed tomography scans, to comprehensively assess their surgical knowledge. We used 297 multiple-choice questions from the 2021-2023 JSBEs. The questions were in Japanese, and 104 of them included images. First, GPT-4 and GPT-4o responses to the text-only questions were collected via OpenAI's application programming interface to evaluate their correct-answer rates. Subsequently, the correct-answer rate for questions that included images was assessed by inputting both text and images. The overall correct-answer rates of GPT-4o and GPT-4 for the text-only questions were 78% (231/297) and 55% (163/297), respectively, with GPT-4o outperforming GPT-4 by 23 percentage points (P<.01). By contrast, there was no significant improvement in the correct-answer rate for questions that included images compared with the text-only results. GPT-4o outperformed GPT-4 on the JSBE; however, both LLMs scored lower than the examinees. Despite the capabilities of LLMs, image recognition remains a challenge, and their clinical application requires caution owing to the potential inaccuracy of their results.
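The querying setup described above, collecting answers through OpenAI's application programming interface for text-only and text-plus-image questions, can be reproduced with a call pattern like the following sketch. The prompt wording, image encoding choice, and file name are illustrative assumptions rather than the study's exact protocol.

```python
# Minimal sketch of querying GPT-4o with a text question and an optional image via
# the OpenAI chat completions API. Prompts and answer handling are illustrative only.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def answer_question(question: str, image_path: str | None = None, model: str = "gpt-4o") -> str:
    content = [{"type": "text", "text": question}]
    if image_path is not None:
        with open(image_path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode("utf-8")
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"}})
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": content}],
    )
    return response.choices[0].message.content

# Text-only question vs. a question with an accompanying image (hypothetical file).
print(answer_question("Which of the following is the most appropriate next step? ..."))
print(answer_question("Select the correct diagnosis ...", image_path="question_ct.jpg"))
```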

Optimizing Thyroid Nodule Management With Artificial Intelligence: Multicenter Retrospective Study on Reducing Unnecessary Fine Needle Aspirations.

Ni JH, Liu YY, Chen C, Shi YL, Zhao X, Li XL, Ye BB, Hu JL, Mou LC, Sun LP, Fu HJ, Zhu XX, Zhang YF, Guo L, Xu HX

PubMed paper · Jul 30 2025
Most artificial intelligence (AI) models for thyroid nodules are designed to screen for malignancy and guide further interventions; however, these models have not yet been fully implemented in clinical practice. This study aimed to evaluate AI in real clinical settings for identifying potentially benign thyroid nodules initially deemed at risk for malignancy by radiologists, thereby reducing unnecessary fine needle aspiration (FNA) and optimizing management. We retrospectively collected a validation cohort of thyroid nodules that had undergone FNA. These nodules were initially assessed as "suspicious for malignancy" by radiologists based on ultrasound features, following standard clinical practice, which prompted the FNA procedures. Ultrasound images of these nodules were re-evaluated using a deep learning-based AI system, and its diagnostic performance was assessed in terms of correct identification of benign nodules and erroneous classification of malignant nodules as benign. Performance metrics such as sensitivity, specificity, and the area under the receiver operating characteristic curve were calculated. In addition, a separate comparison cohort was retrospectively assembled to compare the AI system's ability to correctly identify benign thyroid nodules with that of radiologists. The validation cohort comprised 4572 thyroid nodules (benign: n=3134, 68.5%; malignant: n=1438, 31.5%). The AI system correctly identified 2719 benign nodules (86.8% of benign nodules) and reduced the unnecessary FNA rate from 68.5% (3134/4572) to 9.1% (415/4572). However, 123 malignant nodules (8.6% of malignant cases) were mistakenly identified as benign, the majority of which were of low or intermediate suspicion. In the comparison cohort, the AI system correctly identified 81.4% (96/118) of benign nodules, outperforming junior and senior radiologists, who identified only 40% and 55%, respectively. The area under the curve (AUC) for the AI model was 0.88 (95% CI 0.85-0.91), superior to that of the junior radiologists (AUC=0.43, 95% CI 0.36-0.50; P=.002) and senior radiologists (AUC=0.63, 95% CI 0.55-0.70; P=.003). Compared with radiologists, AI can better serve as a "goalkeeper" in reducing unnecessary FNAs by identifying benign nodules initially assessed as malignant by radiologists. However, active surveillance remains necessary for all such nodules, since a very small number of low-aggressiveness malignant nodules may be mistakenly identified as benign.
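The reported sensitivity, specificity, and AUC follow directly from comparing the AI system's calls against the cytology results. A short sketch with placeholder labels and scores (not the study's data):

```python
# Sketch of the metrics reported above, computed from ground-truth labels
# (1 = malignant, 0 = benign) and the AI model's malignancy scores.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)                               # placeholder cytology labels
scores = np.clip(y_true * 0.6 + rng.normal(0.3, 0.25, 500), 0, 1)   # placeholder AI scores
y_pred = (scores >= 0.5).astype(int)                                # "suspicious" if score >= 0.5

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # malignant nodules correctly flagged
specificity = tn / (tn + fp)   # benign nodules spared an unnecessary FNA
auc = roc_auc_score(y_true, scores)
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, AUC={auc:.3f}")
```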

Fine-grained Prototype Network for MRI Sequence Classification.

Yuan C, Jia X, Wang L, Yang C

PubMed paper · Jul 30 2025
Magnetic resonance imaging (MRI) is a crucial method for clinical diagnosis. Different abdominal MRI sequences provide tissue and structural information from various perspectives, offering reliable evidence for clinicians to make accurate diagnoses. In recent years, with the rapid development of intelligent medical imaging, studies have begun exploring deep learning methods for MRI sequence recognition. However, owing to the significant intra-class variations and subtle inter-class differences in MRI sequences, traditional deep learning algorithms still struggle to handle such complexly distributed data. In addition, the key features for identifying MRI sequence categories often lie in subtle details, while considerable discrepancies can be observed among sequences from individual samples. Current deep learning-based MRI sequence classification methods tend to overlook these fine-grained differences across diverse samples. To overcome these challenges, this paper proposes a fine-grained prototype network, SequencesNet, for MRI sequence classification. A network combining convolutional neural networks (CNNs) with an improved vision transformer is constructed for feature extraction, considering both local and global information. Specifically, a Feature Selection Module (FSM) is added to the vision transformer, and fine-grained features for sequence discrimination are selected based on fused attention weights from multiple layers. A Prototype Classification Module (PCM) is then proposed to classify MRI sequences based on the fine-grained representations. Comprehensive experiments were conducted on a public abdominal MRI sequence classification dataset and a private dataset. SequencesNet achieved the highest accuracies, 96.73% and 95.98% on the two datasets, respectively, outperforming comparative prototype-based and fine-grained models. Visualization results show that SequencesNet better captures fine-grained information. SequencesNet thus shows promising performance in MRI sequence classification, excelling at distinguishing subtle inter-class differences and handling large intra-class variability: the FSM enhances clinical interpretability by focusing on fine-grained features, and the PCM improves clustering by optimizing prototype-sample distances. Compared with baselines such as 3DResNet18 and TransFG, SequencesNet achieves higher recall and precision, particularly for similar sequences such as DCE-LAP and DCE-PVP. The modular design of SequencesNet can be extended to other medical imaging tasks, including but not limited to multimodal image fusion, lesion detection, and disease staging. Future work will focus on reducing computational complexity and improving the generalization of the model.
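The Prototype Classification Module assigns a sequence to the class whose learned prototype lies closest in the fine-grained feature space. The sketch below shows a generic distance-based prototype classifier; the specific distance metric and prototype parameterization are assumptions for illustration, not the paper's exact formulation.

```python
# Hedged sketch of prototype-based classification: each MRI sequence class keeps a
# learnable prototype vector, and a feature is scored by its negative squared
# distance to every prototype.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features):
        # features: (batch, feat_dim) -> logits: (batch, num_classes)
        dists = torch.cdist(features, self.prototypes) ** 2
        return -dists                      # closer prototype -> larger logit

# Example: 8-class sequence classification on 256-dim fine-grained features.
clf = PrototypeClassifier(256, 8)
logits = clf(torch.randn(4, 256))
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 8, (4,)))
```

Training with cross-entropy on these distance-based logits simultaneously pulls samples toward their class prototype and pushes them away from the others, which is the clustering effect the abstract attributes to the PCM.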

Advancing Alzheimer's Diagnosis with AI-Enhanced MRI: A Review of Challenges and Implications.

Batool Z, Hu S, Kamal MA, Greig NH, Shen B

PubMed paper · Jul 30 2025
Neurological disorders are marked by neurodegeneration, leading to impaired cognition, psychosis, and mood alterations. These symptoms are typically associated with functional changes in both emotional and cognitive processes, which are often correlated with anatomical variations in the brain. Hence, brain structural magnetic resonance imaging (MRI) data have become a critical focus in research, particularly for predictive modeling. The involvement of large MRI data consortia, such as the Alzheimer's Disease Neuroimaging Initiative (ADNI), has facilitated numerous MRI-based classification studies utilizing advanced artificial intelligence models. Among these, convolutional neural networks (CNNs) and non-convolutional artificial neural networks (NC-ANNs) have been prominently employed for brain image processing tasks. These deep learning models have shown significant promise in enhancing the predictive performance for the diagnosis of neurological disorders, with a particular emphasis on Alzheimer's disease (AD). This review aimed to provide a comprehensive summary of these deep learning studies, critically evaluating their methodologies and outcomes. By categorizing the studies into various sub-fields, we aimed to highlight the strengths and limitations of using MRI-based deep learning approaches for diagnosing brain disorders. Furthermore, we discussed the potential implications of these advancements in clinical practice, considering the challenges and future directions for improving diagnostic accuracy and patient outcomes. Through this detailed analysis, we seek to contribute to the ongoing efforts in harnessing AI for better understanding and management of AD.

Deep Learning for the Diagnosis and Treatment of Thyroid Cancer: A Review.

Gao R, Mai S, Wang S, Hu W, Chang Z, Wu G, Guan H

PubMed paper · Jul 30 2025
In recent years, the application of deep learning (DL) technology in the thyroid field has grown exponentially, greatly promoting innovation in thyroid disease research. Thyroid cancer is the most common malignant tumor of the endocrine system, and its precise diagnosis and treatment have been a key focus of clinical research. This article systematically reviews the latest progress in DL for the diagnosis and treatment of thyroid malignancies, focusing on breakthrough applications of advanced models such as convolutional neural networks (CNNs), long short-term memory networks (LSTMs), and generative adversarial networks (GANs) in key areas such as ultrasound image analysis of thyroid nodules, automatic classification of pathological images, and assessment of extrathyroidal extension. Furthermore, the review highlights the great potential of DL techniques in the development of individualized treatment planning and prognosis prediction. It also analyzes the technical bottlenecks and clinical challenges facing current DL applications in thyroid cancer diagnosis and treatment and looks ahead to future directions for development. The aim of this review is to provide the latest research insights for clinical practitioners, promote further improvements in the precision diagnosis and treatment of thyroid cancer, and ultimately achieve better diagnostic and therapeutic outcomes for patients with thyroid cancer.

Feature Selection in Healthcare Datasets: Towards a Generalizable Solution.

Maruotto I, Ciliberti FK, Gargiulo P, Recenti M

PubMed paper · Jul 29 2025
The increasing dimensionality of healthcare datasets presents major challenges for clinical data analysis and interpretation. This study introduces a scalable ensemble feature selection (FS) strategy optimized for multi-biometric healthcare datasets, aiming to address the need for dimensionality reduction, identify the most significant features, improve machine learning model performance, and enhance interpretability in a clinical context. The novel waterfall selection, which sequentially integrates (a) tree-based feature ranking and (b) greedy backward feature elimination, produces several candidate feature subsets. These subsets are then combined using a specific merging strategy to produce a single set of clinically relevant features. The overall method is applied to two healthcare datasets: the biosignal-based BioVRSea dataset, containing electromyography, electroencephalography, and center-of-pressure data for postural control and motion sickness assessment, and the image-based SinPain dataset, which includes MRI and CT-scan data for studying knee osteoarthritis. Our ensemble FS approach demonstrated effective dimensionality reduction, achieving more than a 50% decrease in the size of certain feature subsets. The reduced feature sets maintained or improved model classification metrics when tested with Support Vector Machine and Random Forest models. The proposed ensemble FS method retains the features essential for distinguishing clinical outcomes, leading to models that are both computationally efficient and clinically interpretable. Furthermore, the adaptability of this method across two heterogeneous healthcare datasets and the scalability of the algorithm indicate its potential as a generalizable tool in healthcare studies. This approach can advance clinical decision support systems, making high-dimensional healthcare datasets more accessible and clinically interpretable.
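Under stated assumptions (a random-forest ranking with an arbitrary top-half cutoff, scikit-learn's backward SequentialFeatureSelector for the elimination step, and intersection as the merge rule), a minimal sketch of the waterfall idea looks like this; it is not the authors' exact pipeline.

```python
# Sketch of the two-step "waterfall" selection described above: (a) rank features with
# a tree ensemble, then (b) greedily eliminate features backward on the top-ranked subset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SequentialFeatureSelector

X, y = make_classification(n_samples=300, n_features=60, n_informative=10, random_state=0)

# (a) tree-based ranking: keep the top half of features by importance (cutoff is illustrative)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top = np.argsort(forest.feature_importances_)[::-1][:30]

# (b) greedy backward elimination on the retained features
sfs = SequentialFeatureSelector(
    RandomForestClassifier(n_estimators=50, random_state=0),
    n_features_to_select=10, direction="backward", cv=5,
).fit(X[:, top], y)
selected = top[sfs.get_support()]
print("selected feature indices:", sorted(selected))

# Repeating the pipeline with different seeds or models yields several subsets that can
# then be merged (e.g., by intersection) into one clinically interpretable feature set.
```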