
tUbe net: a generalisable deep learning tool for 3D vessel segmentation

Holroyd, N. A., Li, Z., Walsh, C., Brown, E. E., Shipley, R. J., Walker-Samuel, S.

bioRxiv preprint, May 26, 2025
Deep learning has become an invaluable tool for bioimage analysis but, while open-source cell annotation software such as cellpose is widely used, an equivalent tool for three-dimensional (3D) vascular annotation does not exist. With the vascular system being directly impacted by a broad range of diseases, there is significant medical interest in quantitative analysis for vascular imaging. However, existing deep learning approaches for this task are specialised to particular tissue types or imaging modalities. We present a new deep learning model for segmentation of vasculature that is generalisable across tissues, modalities, scales and pathologies. To create a generalisable model, a 3D convolutional neural network was trained using data from multiple modalities, including optical imaging, computed tomography and photoacoustic imaging. Through this varied training set, the model was forced to learn common features of vessels across modalities and scales. Following this, the general model was fine-tuned to different applications with a minimal amount of manually labelled ground truth data. It was found that the general model could be specialised to segment new datasets with a high degree of accuracy using as little as 0.3% of the volume of a dataset for fine-tuning. As such, this model enables users to produce accurate segmentations of 3D vascular networks without the need to label large amounts of training data.
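The fine-tuning regime described above can be sketched in miniature: freeze a pretrained feature extractor and retrain only a final voxel classifier on roughly 0.3% of the volume. The sketch below is an illustrative stand-in using random features and a logistic output layer; it is not tUbe net's actual architecture or training code, and all sizes and names are assumptions.

```python
import numpy as np

# Illustrative sketch only: fine-tune a single linear output layer on a tiny
# labelled fraction of a volume, keeping a (hypothetical) pretrained feature
# extractor frozen. Real training in the paper uses a full 3D CNN.
rng = np.random.default_rng(0)

n_voxels, n_features = 100_000, 16
features = rng.normal(size=(n_voxels, n_features))  # stand-in for frozen CNN features
true_w = rng.normal(size=n_features)
labels = (features @ true_w > 0).astype(float)      # synthetic "vessel" mask

# Label only ~0.3% of the volume, as in the abstract's fine-tuning regime.
n_labelled = int(0.003 * n_voxels)
idx = rng.choice(n_voxels, size=n_labelled, replace=False)
X, y = features[idx], labels[idx]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

w = np.zeros(n_features)
lr = 0.5
for _ in range(500):                                # gradient descent on mean BCE
    p = sigmoid(X @ w)
    w -= lr * X.T @ (p - y) / n_labelled

# Evaluate on the full volume, most of which was never labelled.
pred = sigmoid(features @ w) > 0.5
accuracy = float((pred == labels).mean())
print(f"voxel accuracy after fine-tuning on 0.3% of the volume: {accuracy:.3f}")
```

The point of the sketch is the data budget: 300 labelled voxels out of 100,000 suffice here because the frozen features already separate the classes, which mirrors the paper's claim that a general model needs very little new ground truth to specialise.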

Evolution of deep learning tooth segmentation from CT/CBCT images: a systematic review and meta-analysis.

Kot WY, Au Yeung SY, Leung YY, Leung PH, Yang WF

PubMed paper, May 26, 2025
Deep learning has been utilized to segment teeth from computed tomography (CT) or cone-beam CT (CBCT). However, the performance of deep learning is unknown due to multiple models and diverse evaluation metrics. This systematic review and meta-analysis aims to evaluate the evolution and performance of deep learning in tooth segmentation. We systematically searched PubMed, Web of Science, Scopus, IEEE Xplore, arXiv.org, and ACM for studies investigating deep learning in human tooth segmentation from CT/CBCT. Included studies were assessed using the Quality Assessment of Diagnostic Accuracy Studies (QUADAS-2) tool. Data were extracted for meta-analyses by random-effects models. A total of 30 studies were included in the systematic review, and 28 of them were included for meta-analyses. Various deep learning algorithms were categorized according to the backbone network, encompassing single-stage convolutional models, convolutional models with U-Net architecture, Transformer models, convolutional models with attention mechanisms, and combinations of multiple models. Convolutional models with U-Net architecture were the most commonly used deep learning algorithms. The integration of attention mechanisms within convolutional models has become a new topic. Twenty-nine evaluation metrics were identified, with the Dice Similarity Coefficient (DSC) being the most popular. The pooled results were 0.93 [0.93, 0.93] for DSC, 0.86 [0.85, 0.87] for Intersection over Union (IoU), 0.22 [0.19, 0.24] for Average Symmetric Surface Distance (ASSD), 0.92 [0.90, 0.94] for sensitivity, 0.71 [0.26, 1.17] for 95% Hausdorff distance, and 0.96 [0.93, 0.98] for precision. No significant difference was observed in the segmentation of single-rooted or multi-rooted teeth. No obvious correlation between sample size and segmentation performance was observed.
Multiple deep learning algorithms have been successfully applied to tooth segmentation from CT/CBCT, and their evolution has been summarized and categorized according to backbone structure. In the future, studies are needed with standardized protocols and open labelled datasets.
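The two headline metrics pooled above, DSC and IoU, are computed on binary masks and are deterministically related, which is worth keeping in mind when comparing pooled values across studies. A minimal sketch on synthetic masks (not CBCT data):

```python
import numpy as np

# Sketch: Dice Similarity Coefficient (DSC) and Intersection over Union (IoU)
# for a predicted binary segmentation versus ground truth.
def dice(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def iou(pred, gt):
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

gt = np.zeros((8, 8, 8), dtype=bool)
gt[2:6, 2:6, 2:6] = True                # ground-truth "tooth" cube
pred = np.zeros_like(gt)
pred[3:7, 2:6, 2:6] = True              # prediction shifted by one voxel

d, j = dice(pred, gt), iou(pred, gt)
print(f"DSC={d:.3f}  IoU={j:.3f}")
# The two metrics are monotonically related: DSC = 2*IoU / (1 + IoU),
# so a pooled DSC of 0.93 implies an IoU near 0.87.
```

Because of this one-to-one relation, studies reporting only one of the two metrics can still be compared with studies reporting the other.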

Rep3D: Re-parameterize Large 3D Kernels with Low-Rank Receptive Modeling for Medical Imaging

Ho Hin Lee, Quan Liu, Shunxing Bao, Yuankai Huo, Bennett A. Landman

arXiv preprint, May 26, 2025
In contrast to vision transformers, which model long-range dependencies through global self-attention, large kernel convolutions provide a more efficient and scalable alternative, particularly in high-resolution 3D volumetric settings. However, naively increasing kernel size often leads to optimization instability and degradation in performance. Motivated by the spatial bias observed in effective receptive fields (ERFs), we hypothesize that different kernel elements converge at variable rates during training. To support this, we derive a theoretical connection between element-wise gradients and first-order optimization, showing that structurally re-parameterized convolution blocks inherently induce spatially varying learning rates. Building on this insight, we introduce Rep3D, a 3D convolutional framework that incorporates a learnable spatial prior into large kernel training. A lightweight two-stage modulation network generates a receptive-biased scaling mask, adaptively re-weighting kernel updates and enabling local-to-global convergence behavior. Rep3D adopts a plain encoder design with large depthwise convolutions, avoiding the architectural complexity of multi-branch compositions. We evaluate Rep3D on five challenging 3D segmentation benchmarks and demonstrate consistent improvements over state-of-the-art baselines, including transformer-based and fixed-prior re-parameterization methods. By unifying spatial inductive bias with optimization-aware learning, Rep3D offers an interpretable and scalable solution for 3D medical image analysis. The source code is publicly available at https://github.com/leeh43/Rep3D.
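The central mechanism, a spatial mask that re-weights element-wise kernel updates so that each position gets its own effective learning rate, can be sketched with a fixed Gaussian prior. This is a conceptual illustration under assumed sizes, not Rep3D's learned two-stage modulation network:

```python
import numpy as np

# Conceptual sketch: a spatial scaling mask element-wise re-weights gradient
# updates of a large 3D kernel, so central elements take larger steps than
# peripheral ones (spatially varying effective learning rates).
rng = np.random.default_rng(0)
k = 7                                               # large kernel size (assumed)
kernel = rng.normal(scale=0.01, size=(k, k, k))
grad = rng.normal(size=(k, k, k))                   # hypothetical gradient

# Receptive-biased mask: largest at the kernel centre, decaying outward.
coords = np.stack(np.meshgrid(*[np.arange(k)] * 3, indexing="ij"))
dist = np.linalg.norm(coords - (k - 1) / 2, axis=0)
mask = np.exp(-dist**2 / (2 * 2.0**2))              # fixed Gaussian spatial prior

lr = 0.1
kernel_new = kernel - lr * mask * grad              # spatially varying step
print(f"effective lr at centre: {lr * mask[k//2, k//2, k//2]:.3f}, "
      f"at corner: {lr * mask[0, 0, 0]:.4f}")
```

In Rep3D the mask is generated by a lightweight network and learned jointly with the kernel; the fixed Gaussian here only illustrates why the update becomes local-to-global.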

Advancements in Medical Image Classification through Fine-Tuning Natural Domain Foundation Models

Mobina Mansoori, Sajjad Shahabodini, Farnoush Bayatmakou, Jamshid Abouei, Konstantinos N. Plataniotis, Arash Mohammadi

arXiv preprint, May 26, 2025
Foundation models are large-scale models, pre-trained on massive datasets, that perform a wide range of tasks. These models have shown consistently improved results as new methods have been introduced. It is crucial to analyze how these trends impact the medical field and determine whether these advancements can drive meaningful change. This study investigates the application of recent state-of-the-art foundation models (DINOv2, MAE, VMamba, CoCa, SAM2, and AIMv2) to medical image classification. We explore their effectiveness on datasets including CBIS-DDSM for mammography, ISIC2019 for skin lesions, APTOS2019 for diabetic retinopathy, and CheXpert for chest radiographs. By fine-tuning these models and evaluating their configurations, we aim to understand the potential of these advancements in medical image classification. The results indicate that these advanced models significantly enhance classification outcomes, demonstrating robust performance despite limited labeled data. Based on our results, the AIMv2, DINOv2, and SAM2 models outperformed the others, demonstrating that progress in natural domain training has positively impacted the medical domain and improved classification outcomes. Our code is publicly available at: https://github.com/sajjad-sh33/Medical-Transfer-Learning.

An Explainable Diagnostic Framework for Neurodegenerative Dementias via Reinforcement-Optimized LLM Reasoning

Andrew Zamai, Nathanael Fijalkow, Boris Mansencal, Laurent Simon, Eloi Navet, Pierrick Coupe

arXiv preprint, May 26, 2025
The differential diagnosis of neurodegenerative dementias is a challenging clinical task, mainly because of the overlap in symptom presentation and the similarity of patterns observed in structural neuroimaging. To improve diagnostic efficiency and accuracy, deep learning-based methods such as Convolutional Neural Networks and Vision Transformers have been proposed for the automatic classification of brain MRIs. However, despite their strong predictive performance, these models find limited clinical utility due to their opaque decision making. In this work, we propose a framework that integrates two core components to enhance diagnostic transparency. First, we introduce a modular pipeline for converting 3D T1-weighted brain MRIs into textual radiology reports. Second, we explore the potential of modern Large Language Models (LLMs) to assist clinicians in the differential diagnosis between frontotemporal dementia subtypes, Alzheimer's disease, and normal aging based on the generated reports. To bridge the gap between predictive accuracy and explainability, we employ reinforcement learning to incentivize diagnostic reasoning in LLMs. Without requiring supervised reasoning traces or distillation from larger models, our approach enables the emergence of structured diagnostic rationales grounded in neuroimaging findings. Unlike post-hoc explainability methods that retrospectively justify model decisions, our framework generates diagnostic rationales as part of the inference process, producing causally grounded explanations that inform and guide the model's decision-making process. In doing so, our framework matches the diagnostic performance of existing deep learning methods while offering rationales that support its diagnostic conclusions.

Automated landmark-based mid-sagittal plane: reliability for 3-dimensional mandibular asymmetry assessment on head CT scans.

Alt S, Gajny L, Tilotta F, Schouman T, Dot G

PubMed paper, May 26, 2025
The determination of the mid-sagittal plane (MSP) on three-dimensional (3D) head imaging is key to the assessment of facial asymmetry. The aim of this study was to evaluate the reliability of an automated landmark-based MSP to quantify mandibular asymmetry on head computed tomography (CT) scans. A dataset of 368 CT scans, including orthognathic surgery patients, was automatically annotated with 3D cephalometric landmarks via a previously published deep learning-based method. Five of these landmarks were used to automatically construct an MSP orthogonal to the Frankfurt horizontal plane. The reliability of automatic MSP construction was compared with the reliability of manual MSP construction based on 6 manual localizations by 3 experienced operators on 19 randomly selected CT scans. The mandibular asymmetry of the 368 CT scans with respect to the MSP was calculated and compared with clinical expert judgment. The construction of the MSP was found to be highly reliable, both manually and automatically. The manual reproducibility 95% limit of agreement was less than 1 mm for y-axis translation and less than 1.1° for x- and z-axis rotation, and the automatic measurement lay within the confidence interval of the manual method. The automatic MSP construction was shown to be clinically relevant, with the mandibular asymmetry measures being consistent with the expertly assessed levels of asymmetry. The proposed automatic landmark-based MSP construction was found to be as reliable as manual construction and clinically relevant in assessing the mandibular asymmetry of 368 head CT scans. Once implemented in clinical software, fully automated landmark-based MSP construction could be used clinically to assess mandibular asymmetry on head CT scans.
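The geometry behind a landmark-based MSP is simple to illustrate: a plane through midline landmarks, constrained to be orthogonal to the Frankfurt horizontal plane, against which left/right landmark distances quantify asymmetry. The landmark choice and coordinates below are invented for the sketch and are not the study's five-landmark construction:

```python
import numpy as np

# Illustrative geometry only: build a mid-sagittal plane (MSP) through two
# midline landmarks, orthogonal to the Frankfurt horizontal (FH) plane, then
# measure left/right mandibular asymmetry as signed point-to-plane distances.
def unit(v):
    return v / np.linalg.norm(v)

frankfurt_normal = np.array([0.0, 0.0, 1.0])        # FH plane ~ axial (assumed)
nasion = np.array([0.0, 80.0, 20.0])                # hypothetical midline landmarks (mm)
basion = np.array([0.0, -30.0, -10.0])

midline_dir = unit(basion - nasion)
# The MSP contains both the midline direction and the FH normal, so its normal
# is perpendicular to both; orthogonality to FH is guaranteed by construction.
msp_normal = unit(np.cross(midline_dir, frankfurt_normal))

def signed_dist(point):
    return float(np.dot(point - nasion, msp_normal))

gonion_left = np.array([-45.0, -20.0, -40.0])       # hypothetical mandibular landmarks
gonion_right = np.array([47.5, -20.0, -40.0])

asymmetry = abs(signed_dist(gonion_left)) - abs(signed_dist(gonion_right))
print(f"left-right gonion asymmetry: {asymmetry:+.1f} mm")
```

Constructing the normal as a cross product is what makes the orthogonality constraint exact rather than fitted, which is one reason landmark-based MSPs reproduce well.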

Methodological Challenges in Deep Learning-Based Detection of Intracranial Aneurysms: A Scoping Review.

Joo B

PubMed paper, May 26, 2025
Artificial intelligence (AI), particularly deep learning, has demonstrated high diagnostic performance in detecting intracranial aneurysms on computed tomography angiography (CTA) and magnetic resonance angiography (MRA). However, the clinical translation of these technologies remains limited due to methodological limitations and concerns about generalizability. This scoping review comprehensively evaluates 36 studies that applied deep learning to intracranial aneurysm detection on CTA or MRA, focusing on study design, validation strategies, reporting practices, and reference standards. Key findings include inconsistent handling of ruptured and previously treated aneurysms, underreporting of coexisting brain or vascular abnormalities, limited use of external validation, and an almost complete absence of prospective study designs. Only a minority of studies employed diagnostic cohorts that reflect real-world aneurysm prevalence, and few reported all essential performance metrics, such as patient-wise and lesion-wise sensitivity, specificity, and false positives per case. These limitations suggest that current studies remain at the stage of technical validation, with high risks of bias and limited clinical applicability. To facilitate real-world implementation, future research must adopt more rigorous designs, representative and diverse validation cohorts, standardized reporting practices, and greater attention to human-AI interaction.
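The review's point that few studies report all essential performance metrics is concrete enough to illustrate: patient-wise sensitivity, lesion-wise sensitivity, and false positives per case answer different clinical questions and can diverge sharply. The per-case counts below are invented for illustration:

```python
# Sketch of the detection metrics the review calls essential. Each tuple is
# (true aneurysms in the case, true positives detected, false positives).
cases = [
    (1, 1, 0),
    (2, 1, 1),
    (0, 0, 2),   # aneurysm-free case: contributes only false positives
    (1, 1, 0),
    (3, 2, 1),
]

n_cases = len(cases)
lesions = sum(t for t, _, _ in cases)
lesion_tp = sum(tp for _, tp, _ in cases)

# A case counts as a patient-wise true positive if >= 1 lesion is found.
pos_cases = [c for c in cases if c[0] > 0]
patient_tp = sum(1 for t, tp, _ in pos_cases if tp > 0)

patient_sens = patient_tp / len(pos_cases)
lesion_sens = lesion_tp / lesions
fp_per_case = sum(fp for _, _, fp in cases) / n_cases

print(f"patient-wise sensitivity: {patient_sens:.2f}")
print(f"lesion-wise sensitivity:  {lesion_sens:.2f}")
print(f"false positives per case: {fp_per_case:.2f}")
```

Here patient-wise sensitivity is perfect while lesion-wise sensitivity is not, which is exactly the gap that goes unnoticed when only one of the two is reported.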

AI in Orthopedic Research: A Comprehensive Review.

Misir A, Yuce A

PubMed paper, May 26, 2025
Artificial intelligence (AI) is revolutionizing orthopedic research and clinical practice by enhancing diagnostic accuracy, optimizing treatment strategies, and streamlining clinical workflows. Recent advances in deep learning have enabled the development of algorithms that detect fractures, grade osteoarthritis, and identify subtle pathologies in radiographic and magnetic resonance images with performance comparable to expert clinicians. These AI-driven systems reduce missed diagnoses and provide objective, reproducible assessments that facilitate early intervention and personalized treatment planning. Moreover, AI has made significant strides in predictive analytics by integrating diverse patient data, including gait and imaging features, to forecast surgical outcomes, implant survivorship, and rehabilitation trajectories. Emerging applications in robotics, augmented reality, digital twin technologies, and exoskeleton control promise to further transform preoperative planning and intraoperative guidance. Despite these promising developments, challenges such as data heterogeneity, algorithmic bias, and the "black box" nature of many models, as well as issues with robust validation, remain. This comprehensive review synthesizes current developments, critically examines limitations, and outlines future directions for integrating AI into musculoskeletal care.

Deep Learning for Pneumonia Diagnosis: A Custom CNN Approach with Superior Performance on Chest Radiographs

Mehta, A., Vyas, M.

medRxiv preprint, May 26, 2025
Pneumonia is a major global health issue causing substantial morbidity and mortality, underlining the need to identify and treat it quickly and precisely. Despite advances in imaging technology, manual reading of chest X-rays by radiologists remains the basic method for pneumonia detection, causing delays in both diagnosis and treatment. This study proposes a method to automate pneumonia detection using deep learning techniques. The approach employs a custom convolutional neural network (CNN) trained on pneumonia-positive and pneumonia-negative cases from several healthcare providers. Various pre-processing steps were applied to the chest radiographs to improve data integrity and training efficiency. In a comparison with VGG19, ResNet50, InceptionV3, DenseNet201, and MobileNetV3, our custom CNN model was found to best balance accuracy, recall, and parameter complexity, achieving 96.5% accuracy and a 96.6% F1 score. This study contributes to the development of an automated and reliable pneumonia detection system, which could improve patient outcomes and increase healthcare efficiency. The full project is available here.
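The two headline metrics reported above come straight from a confusion matrix, and F1 balances precision against recall in a way plain accuracy does not. The counts below are hypothetical, chosen only to show the arithmetic:

```python
# Sketch: accuracy and F1 score from a binary confusion matrix.
# Counts are invented for illustration, not the study's results.
tp, tn, fp, fn = 90, 80, 10, 5

accuracy = (tp + tn) / (tp + tn + fp + fn)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)   # equals 2*TP / (2*TP + FP + FN)

print(f"accuracy={accuracy:.3f}  F1={f1:.3f}")
```

On a class-imbalanced test set, accuracy can stay high while F1 drops, which is why pneumonia classifiers are usually reported with both.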

Predicting Surgical Versus Nonsurgical Management of Acute Isolated Distal Radius Fractures in Patients Under Age 60 Using a Convolutional Neural Network.

Hsu D, Persitz J, Noori A, Zhang H, Mashouri P, Shah R, Chan A, Madani A, Paul R

PubMed paper, May 26, 2025
Distal radius fractures (DRFs) represent up to 20% of the fractures seen in the emergency department. Delays to surgery of more than 14 days are associated with poorer functional outcomes and increased health care utilization and costs. At our institution, the average time to surgery is more than 19 days because of the separation of surgical and nonsurgical care pathways and a lengthy referral process. To address this challenge, we aimed to create a convolutional neural network (CNN) capable of automating DRF X-ray analysis and triaging. We hypothesized that this model would accurately predict whether an acute isolated DRF in a patient under the age of 60 years would be treated surgically or nonsurgically at our institution based on the radiographic input. We included 163 patients under the age of 60 years who presented to the emergency department between 2018 and 2023 with an acute isolated DRF and who were referred for clinical follow-up. Radiographs taken within 4 weeks of injury were collected in posterior-anterior and lateral views and preprocessed for model training. The surgeons' decision to treat surgically or nonsurgically at our institution was the reference standard for assessing the model's prediction accuracy. We included 723 radiographic posterior-anterior and lateral pairs (385 surgical and 338 nonsurgical) for model training. The best-performing model (seven CNN layers, one fully connected layer, an image input size of 256 × 256 pixels, and a 1.5× weighting for volarly displaced fractures) achieved 88% accuracy and 100% sensitivity. True-positive (100%), true-negative (72.7%), false-positive (27.3%), and false-negative (0%) rates were calculated. After training on institution-specific indications, a CNN-based algorithm can predict with 88% accuracy whether an acute isolated DRF in a patient under the age of 60 years will be treated surgically or nonsurgically.
By promptly identifying patients who would benefit from expedited surgical treatment pathways, this model can reduce referral times.
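The 1.5× weighting for volarly displaced fractures mentioned above is a per-sample loss weight. A minimal sketch of how such a subgroup-weighted binary cross-entropy is computed (the probabilities, labels, and weights below are invented for illustration, and the abstract does not specify the exact loss formula used):

```python
import math

# Sketch: per-sample weighted binary cross-entropy, with a larger weight on a
# clinically important subgroup (here, volarly displaced fractures at 1.5x).
def weighted_bce(p, y, w):
    return -w * (y * math.log(p) + (1 - y) * math.log(1 - p))

# (predicted prob of "surgical", true label, weight: 1.5 if volarly displaced)
samples = [(0.9, 1, 1.5), (0.2, 0, 1.0), (0.7, 1, 1.0), (0.4, 0, 1.5)]
loss = sum(weighted_bce(p, y, w) for p, y, w in samples) / len(samples)
print(f"mean weighted BCE: {loss:.4f}")
```

Up-weighting a subgroup makes its misclassifications costlier during training, steering the model toward the 100% sensitivity on surgical cases the study prioritized.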
