
Biologically Inspired Deep Learning Approaches for Fetal Ultrasound Image Classification

Rinat Prochii, Elizaveta Dakhova, Pavel Birulin, Maxim Sharaev

arXiv preprint, Jun 10 2025
Accurate classification of second-trimester fetal ultrasound images remains challenging due to low image quality, high intra-class variability, and significant class imbalance. In this work, we introduce a simple yet powerful, biologically inspired deep learning ensemble framework that, unlike prior studies focused on only a handful of anatomical targets, simultaneously distinguishes 16 fetal structures. Drawing on the hierarchical, modular organization of biological vision systems, our model stacks two complementary branches (a "shallow" path for coarse, low-resolution cues and a "detailed" path for fine, high-resolution features), concatenating their outputs for final prediction. To our knowledge, no existing method has addressed such a large number of classes with a comparably lightweight architecture. We trained and evaluated on 5,298 routinely acquired clinical images (annotated by three experts and reconciled via Dawid-Skene), reflecting real-world noise and variability rather than a "cleaned" dataset. Despite this complexity, our ensemble (EfficientNet-B0 + EfficientNet-B6 with LDAM-Focal loss) identifies 90% of organs with accuracy > 0.75 and 75% of organs with accuracy > 0.85, performance competitive with more elaborate models applied to far fewer categories. These results demonstrate that biologically inspired modular stacking can yield robust, scalable fetal anatomy recognition in challenging clinical settings.
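
The class imbalance mentioned above is handled with an LDAM-Focal loss. As a rough illustration of how such a loss combines a per-class margin (larger for rarer classes, proportional to n_j^(-1/4)) with focal down-weighting of easy examples, here is a minimal numpy sketch; it is not the authors' implementation, and the margin rescaling and scale factor `s` are assumptions:

```python
import numpy as np

def ldam_focal_loss(logits, labels, class_counts, s=10.0, gamma=2.0):
    """Minimal LDAM-style margin plus focal weighting (illustrative only).

    LDAM enforces a larger margin for rarer classes (m_j proportional to
    n_j ** -0.25); focal loss down-weights well-classified examples by
    (1 - p_t) ** gamma.  The scale `s` and margin rescaling are assumptions.
    """
    m = 1.0 / np.power(class_counts, 0.25)
    m = m / m.max() * 0.5                        # rescale margins to <= 0.5
    adj = logits.astype(float).copy()
    idx = np.arange(len(labels))
    adj[idx, labels] -= m[labels]                # subtract true-class margin
    z = s * adj
    z -= z.max(axis=1, keepdims=True)            # numerical stability
    p = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    p_t = p[idx, labels]
    return float(np.mean(-((1.0 - p_t) ** gamma) * np.log(p_t + 1e-12)))
```

Because the margin scales inversely with class frequency, a correct but barely confident prediction on a rare class is penalized more than the same prediction on a common class.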

Artificial intelligence and endoanal ultrasound: pioneering automated differentiation of benign anal and sphincter lesions.

Mascarenhas M, Almeida MJ, Martins M, Mendes F, Mota J, Cardoso P, Mendes B, Ferreira J, Macedo G, Poças C

PubMed, Jun 10 2025
Anal injuries, such as lacerations and fissures, are challenging to diagnose because of their anatomical complexity. Endoanal ultrasound (EAUS) has proven to be a reliable tool for detailed visualization of anal structures but relies on expert interpretation. Artificial intelligence (AI) may offer a solution for more accurate and consistent diagnoses. This study aims to develop and test a convolutional neural network (CNN)-based algorithm for the automatic classification of fissures and anal lacerations (internal and external) on EAUS. A single-center retrospective study analyzed 238 EAUS radial probe exams (April 2022-January 2024), categorizing 4528 frames into fissures (516), external lacerations (2174), and internal lacerations (1838), following validation by three experts. Data were split 80% for training and 20% for testing. Performance metrics included sensitivity, specificity, and accuracy. For external lacerations, the CNN achieved 82.5% sensitivity, 93.5% specificity, and 88.2% accuracy. For internal lacerations, it achieved 91.7% sensitivity, 85.9% specificity, and 88.2% accuracy. For anal fissures, it achieved 100% sensitivity, specificity, and accuracy. This first EAUS AI-assisted model for differentiating benign anal injuries demonstrates excellent diagnostic performance. It highlights AI's potential to improve accuracy, reduce reliance on expertise, and support broader clinical adoption. While currently limited by a small dataset and single-center scope, this work represents a significant step towards integrating AI into proctology.
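
The performance figures quoted above reduce to simple ratios over confusion-matrix counts. A minimal sketch (with hypothetical counts, not the study's raw data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)               # recall on lesion-present frames
    specificity = tn / (tn + fp)               # recall on lesion-absent frames
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

For example, `binary_metrics(tp=9, fp=2, tn=8, fn=1)` returns (0.9, 0.8, 0.85): the model caught 9 of 10 true lesions and correctly rejected 8 of 10 lesion-free frames.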

Foundation Models in Medical Imaging -- A Review and Outlook

Vivien van Veldhuizen, Vanessa Botha, Chunyao Lu, Melis Erdal Cesur, Kevin Groot Lipman, Edwin D. de Jong, Hugo Horlings, Clárisa I. Sanchez, Cees G. M. Snoek, Lodewyk Wessels, Ritse Mann, Eric Marcus, Jonas Teuwen

arXiv preprint, Jun 10 2025
Foundation models (FMs) are changing the way medical images are analyzed by learning from large collections of unlabeled data. Instead of relying on manually annotated examples, FMs are pre-trained to learn general-purpose visual features that can later be adapted to specific clinical tasks with little additional supervision. In this review, we examine how FMs are being developed and applied in pathology, radiology, and ophthalmology, drawing on evidence from over 150 studies. We explain the core components of FM pipelines, including model architectures, self-supervised learning methods, and strategies for downstream adaptation. We also review how FMs are being used in each imaging domain and compare design choices across applications. Finally, we discuss key challenges and open questions to guide future research.

Challenges and Advances in Classifying Brain Tumors: An Overview of Machine, Deep Learning, and Hybrid Approaches with Future Perspectives in Medical Imaging.

Alshomrani F

PubMed, Jun 10 2025
Accurate brain tumor classification is essential in neuro-oncology, as it directly informs treatment strategies and influences patient outcomes. This review comprehensively explores machine learning (ML) and deep learning (DL) models that enhance the accuracy and efficiency of brain tumor classification using medical imaging data, particularly Magnetic Resonance Imaging (MRI). As a noninvasive imaging technique, MRI plays a central role in detecting, segmenting, and characterizing brain tumors by providing detailed anatomical views that help distinguish various tumor types, including gliomas, meningiomas, and metastatic brain lesions. The review presents a detailed analysis of diverse ML approaches, from classical algorithms such as Support Vector Machines (SVM) and Decision Trees to advanced DL models, including Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), and hybrid architectures that combine multiple techniques for improved performance. Through comparative analysis of recent studies across various datasets, the review evaluates these methods using metrics such as accuracy, sensitivity, specificity, and AUC-ROC, offering insights into their effectiveness and limitations. Significant challenges in the field are examined, including the scarcity of annotated datasets, computational complexity requirements, model interpretability issues, and barriers to clinical integration. The review proposes future directions to address these challenges, highlighting the potential of multi-modal imaging that combines MRI with other imaging modalities, explainable AI frameworks for enhanced model transparency, and privacy-preserving techniques for securing sensitive patient data. 
This comprehensive analysis demonstrates the transformative potential of ML and DL in advancing brain tumor diagnosis while emphasizing the necessity for continued research and innovation to overcome current limitations and ensure successful clinical implementation for improved patient care.

Machine learning is changing osteoporosis detection: an integrative review.

Zhang Y, Ma M, Huang X, Liu J, Tian C, Duan Z, Fu H, Huang L, Geng B

PubMed, Jun 10 2025
Machine learning drives osteoporosis detection and screening with higher clinical accuracy and accessibility than traditional osteoporosis screening tools. This review takes a step-by-step view of machine learning for osteoporosis detection, providing insights into today's osteoporosis detection and the outlook for the future. The early diagnosis and risk detection of osteoporosis have always been crucial and challenging issues in the medical field. With the in-depth application of artificial intelligence technology, especially machine learning technology in the medical field, significant breakthroughs have been made in the application of early diagnosis and risk detection of osteoporosis. Machine learning is a multidimensional technical system that encompasses a wide variety of algorithm types. Machine learning algorithms have become relatively mature and developed over many years in medical data processing. They possess stable and accurate detection performance, laying a solid foundation for the detection and diagnosis of osteoporosis. As an essential part of the machine learning technical system, deep-learning algorithms are complex algorithm models based on artificial neural networks. Due to their robust image recognition and feature extraction capabilities, deep learning algorithms have become increasingly mature in the early diagnosis and risk assessment of osteoporosis in recent years, opening new ideas and approaches for the early and accurate diagnosis and risk detection of osteoporosis. This paper reviewed the latest research over the past decade, ranging from relatively basic and widely adopted machine learning algorithms combined with clinical data to more advanced deep learning techniques integrated with imaging data such as X-ray, CT, and MRI. 
By analyzing the application of algorithms at different stages, we found that these basic machine learning algorithms performed well when dealing with single structured data but encountered limitations when handling high-dimensional and unstructured imaging data. Deep learning, on the other hand, can significantly improve detection accuracy by automatically extracting image features, especially in histological image analysis, but it faces challenges of its own: the "black-box" problem, heavy reliance on large amounts of labeled data, and difficulties in clinical interpretability. These issues highlight the importance of model interpretability in future machine learning research. Finally, we expect the field to develop predictive models that combine multimodal data (such as clinical indicators, blood biochemical indicators, imaging data, and genetic data) with electronic health records and machine learning techniques. Such a model would provide a skeletal health monitoring system that is highly accessible, personalized, convenient, and efficient, furthering the early detection and prevention of osteoporosis.

Sex estimation from the variables of talocrural joint by using machine learning algorithms.

Ray A, Ray G, Kürtül İ, Şenol GT

PubMed, Jun 9 2025
This study focused on sex determination from variables measured on X-ray images of the talocrural joint using machine learning (ML) algorithms. The mediolateral diameters of the tibia (TMLD) and fibula (FMLD), the distance between the innermost points of the talocrural joint (DIT), the distance between the outermost points of the talocrural joint (DOT), and the distal articular surface of the tibia (TAS), measured on X-ray images of 150 women and 150 men, were evaluated with different ML methods. Logistic regression, Decision Tree, K-Nearest Neighbor, Linear Discriminant Analysis, Naive Bayes, and Random Forest classifiers were used as algorithms. The models achieved accuracies between 82% and 92%, with the highest accuracy obtained by the Random Forest classifier. DOT was the variable that contributed most to the model. Except for age and FMLD, all variables were statistically significant with respect to sex difference. The variables of the talocrural joint were thus classified with high accuracy in terms of sex. In addition, morphometric data about the population were obtained, and racial differences were emphasized.
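
As an illustration of the kind of classifier compared above, the sketch below fits a logistic-regression model by gradient descent on synthetic stand-ins for the five joint measurements; the group means and spreads are invented for the demo and are not values from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-ins for the five measurements (DOT, DIT, TAS, TMLD, FMLD);
# the group means and spreads are invented, not values from the study.
n = 150
female = rng.normal(loc=[63.0, 28.0, 40.0, 33.0, 25.0], scale=3.0, size=(n, 5))
male = rng.normal(loc=[70.0, 32.0, 45.0, 37.0, 28.0], scale=3.0, size=(n, 5))
X = np.vstack([female, male])
y = np.array([0] * n + [1] * n)            # 0 = female, 1 = male

# Standardize, then fit a logistic-regression classifier by gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):
    z = np.clip(X @ w + b, -30.0, 30.0)    # clip for numerical stability
    p = 1.0 / (1.0 + np.exp(-z))           # predicted probability of "male"
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * float(np.mean(p - y))

accuracy = float(np.mean(((X @ w + b) > 0) == y))
```

With clearly separated synthetic groups the training accuracy lands in the high range the abstract reports; on real measurements the separation, and hence the accuracy, would be lower.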

HAIBU-ReMUD: Reasoning Multimodal Ultrasound Dataset and Model Bridging to General Specific Domains

Shijie Wang, Yilun Zhang, Zeyu Lai, Dexing Kong

arXiv preprint, Jun 9 2025
Multimodal large language models (MLLMs) have shown great potential in general domains but perform poorly in some specific domains due to a lack of domain-specific data, such as image-text or video-text data. In some specific domains, abundant image and text data are scattered across sources but lack standardized organization. In the field of medical ultrasound, there are ultrasonic diagnostic books, ultrasonic clinical guidelines, ultrasonic diagnostic reports, and so on. However, these materials are often stored as PDFs, images, etc., and cannot be directly used for training MLLMs. This paper proposes a novel image-text reasoning supervised fine-tuning data generation pipeline to create domain-specific quadruplets (image, question, thinking trace, and answer) from such materials. A medical ultrasound domain dataset, ReMUD, is established, containing over 45,000 reasoning and non-reasoning supervised fine-tuning Question Answering (QA) and Visual Question Answering (VQA) examples. The ReMUD-7B model, fine-tuned on Qwen2.5-VL-7B-Instruct, outperforms general-domain MLLMs in the medical ultrasound field. To facilitate research, the ReMUD dataset, data generation codebase, and ReMUD-7B parameters will be released at https://github.com/ShiDaizi/ReMUD, addressing the data shortage issue in specific-domain MLLMs.
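
A hedged sketch of what one (image, question, thinking trace, answer) quadruplet might look like as a JSONL training record; the field names and content here are assumptions for illustration, not the released ReMUD schema:

```python
import json

# Hypothetical quadruplet record; field names and content are illustrative
# only and do not reflect the actual ReMUD data format.
record = {
    "image": "ultrasound_0001.png",
    "question": "Which organ does the anechoic region most likely represent?",
    "thinking": "The region is thin-walled and anechoic internally, "
                "consistent with a fluid-filled structure.",
    "answer": "Gallbladder",
}

line = json.dumps(record, ensure_ascii=False)   # one line of a JSONL file
restored = json.loads(line)
```

Serializing one record per line keeps the dataset streamable, which is the usual convention for supervised fine-tuning corpora of this kind.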

A Dynamic Contrast-Enhanced MRI-Based Vision Transformer Model for Distinguishing HER2-Zero, -Low, and -Positive Expression in Breast Cancer and Exploring Model Interpretability.

Zhang X, Shen YY, Su GH, Guo Y, Zheng RC, Du SY, Chen SY, Xiao Y, Shao ZM, Zhang LN, Wang H, Jiang YZ, Gu YJ, You C

PubMed, Jun 9 2025
Novel antibody-drug conjugates highlight the benefits for breast cancer patients with low human epidermal growth factor receptor 2 (HER2) expression. This study aims to develop and validate a Vision Transformer (ViT) model based on dynamic contrast-enhanced MRI (DCE-MRI) to classify HER2-zero, -low, and -positive breast cancer patients and to explore its interpretability. The model is trained and validated on early enhancement MRI images from 708 patients in the FUSCC cohort and tested on 80 and 101 patients in the GFPH cohort and FHCMU cohort, respectively. The ViT model achieves AUCs of 0.80, 0.73, and 0.71 in distinguishing HER2-zero from HER2-low/positive tumors across the validation set of the FUSCC cohort and the two external cohorts. Furthermore, the model effectively classifies HER2-low and HER2-positive cases, with AUCs of 0.86, 0.80, and 0.79. Transcriptomics analysis identifies significant biological differences between HER2-low and HER2-positive patients, particularly in immune-related pathways, suggesting potential therapeutic targets. Additionally, Cox regression analysis demonstrates that the prediction score is an independent prognostic factor for overall survival (HR, 2.52; p = 0.007). These findings provide a non-invasive approach for accurately predicting HER2 expression, enabling more precise patient stratification to guide personalized treatment strategies. Further prospective studies are warranted to validate its clinical utility.
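
The AUCs reported above have a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A minimal sketch of that Mann-Whitney formulation:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney statistic: the probability that a randomly
    chosen positive case outscores a randomly chosen negative (ties = 0.5)."""
    wins = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            wins += 1.0 if sp > sn else (0.5 if sp == sn else 0.0)
    return wins / (len(scores_pos) * len(scores_neg))
```

For example, `auc([0.9, 0.8], [0.1, 0.2])` is 1.0 (perfect separation), while 0.5 corresponds to chance; an AUC of 0.86 means an 86% chance a HER2-positive case outscores a HER2-low one.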

Automated detection of spinal bone marrow oedema in axial spondyloarthritis: training and validation using two large phase 3 trial datasets.

Jamaludin A, Windsor R, Ather S, Kadir T, Zisserman A, Braun J, Gensler LS, Østergaard M, Poddubnyy D, Coroller T, Porter B, Ligozio G, Readie A, Machado PM

PubMed, Jun 9 2025
To evaluate the performance of machine learning (ML) models for the automated scoring of spinal MRI bone marrow oedema (BMO) in patients with axial spondyloarthritis (axSpA) and compare them with expert scoring. ML algorithms using SpineNet software were trained and validated on 3483 spinal MRIs from 686 axSpA patients across two clinical trial datasets. The scoring pipeline involved (i) detection and labelling of vertebral bodies and (ii) classification of vertebral units for the presence or absence of BMO. Two models were tested: Model 1, without manual segmentation, and Model 2, incorporating an intermediate manual segmentation step. Model outputs were compared with those of human experts using kappa statistics, balanced accuracy, sensitivity, specificity, and AUC. Both models performed comparably to expert readers, regarding presence vs absence of BMO. Model 1 outperformed Model 2, with an AUC of 0.94 (vs 0.88), accuracy of 75.8% (vs 70.5%), and kappa of 0.50 (vs 0.31), using absolute reader consensus scoring as the external reference; this performance was similar to the expert inter-reader accuracy of 76.8% and kappa of 0.47, in a radiographic axSpA dataset. In a non-radiographic axSpA dataset, Model 1 achieved an AUC of 0.97 (vs 0.91 for Model 2), accuracy of 74.6% (vs 70%), and kappa of 0.52 (vs 0.27), comparable to the expert inter-reader accuracy of 74.2% and kappa of 0.46. ML software shows potential for automated MRI BMO assessment in axSpA, offering benefits such as improved consistency, reduced labour costs, and minimised inter- and intra-reader variability. Clinicaltrials.gov, MEASURE 1 study (NCT01358175); PREVENT study (NCT02696031).
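
Kappa, used above to compare model and expert scoring, measures agreement beyond what the raters' marginal label rates would produce by chance. A minimal sketch for two binary raters (illustrative, not SpineNet's implementation):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary raters given as equal-length 0/1 lists."""
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n      # observed agreement
    p_a, p_b = sum(a) / n, sum(b) / n                  # positive-call rates
    p_exp = p_a * p_b + (1 - p_a) * (1 - p_b)          # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)               # undefined if p_exp == 1
```

Kappa of 1.0 means perfect agreement and 0.0 means no better than chance, which is why the ~0.5 values above can coexist with ~75% raw accuracy: much of the raw agreement is expected by chance alone.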

Developing a Deep Learning Radiomics Model Combining Lumbar CT, Multi-Sequence MRI, and Clinical Data to Predict High-Risk Adjacent Segment Degeneration Following Lumbar Fusion: A Retrospective Multicenter Study.

Zou C, Wang T, Wang B, Fei Q, Song H, Zang L

PubMed, Jun 9 2025
Study design: Retrospective cohort study. Objectives: To develop and validate a model combining clinical data, deep learning radiomics (DLR), and radiomic features from lumbar CT and multi-sequence MRI to predict patients at high risk of adjacent segment degeneration (ASDeg) after lumbar fusion. Methods: This study included 305 patients undergoing preoperative CT and MRI for lumbar fusion surgery, divided into training (n = 192), internal validation (n = 83), and external test (n = 30) cohorts. A Vision Transformer 3D-based deep learning model was developed. LASSO regression was used for feature selection to establish a logistic regression model. ASDeg was defined as adjacent segment degeneration on radiological follow-up 6 months post-surgery. Fourteen machine learning algorithms were evaluated using ROC curves, and a combined model integrating clinical variables was developed. Results: After feature selection, 21 radiomic, 12 DLR, and 3 clinical features were retained. The linear support vector machine algorithm performed best for the radiomic model, and AdaBoost was optimal for the DLR model. A combined model using these and the clinical features was developed, with the multi-layer perceptron as the most effective algorithm. The areas under the curve for the training, internal validation, and external test cohorts were 0.993, 0.936, and 0.835, respectively. The combined model outperformed the combined predictions of two surgeons. Conclusions: This study developed and validated a combined model integrating clinical, DLR, and radiomic features, demonstrating high predictive performance for identifying patients at high risk of ASDeg after lumbar fusion based on clinical data, CT, and MRI. The model could potentially reduce ASDeg-related revision surgeries, thereby reducing the burden on public healthcare.
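
LASSO feature selection, used above to prune the radiomic and DLR features, zeroes out weak coefficients through an L1 penalty. A minimal coordinate-descent sketch, assuming standardized inputs (illustrative, not the study's pipeline):

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=200):
    """LASSO via cyclic coordinate descent with soft-thresholding.

    Minimizes 0.5 * ||y - Xw||^2 + lam * ||w||_1.  A minimal sketch that
    assumes the columns of X are standardized; not the study's pipeline.
    """
    n, d = X.shape
    w = np.zeros(d)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(d):
            r = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ r
            # Soft-threshold: coefficients with |rho| <= lam are set to zero,
            # which is what performs the feature selection.
            w[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_sq[j]
    return w
```

On synthetic data where only one feature drives the response, the penalty drives the irrelevant coefficients exactly to zero while keeping the informative one, mirroring how hundreds of candidate radiomic features are cut down to the 36 retained here.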
