
Bashyam VM, Erus G, Cui Y, Wu D, Hwang G, Getka A, Singh A, Aidinis G, Baik K, Melhem R, Mamourian E, Doshi J, Davison A, Nasrallah IM, Davatzikos C

PubMed · Sep 17 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content</i>. Purpose To introduce an open-source deep learning brain segmentation model for fully automated brain MRI segmentation, enabling rapid segmentation and facilitating large-scale neuroimaging research. Materials and Methods In this retrospective study, a deep learning model was developed using a diverse training dataset of 1900 MRI scans (ages 24-93 with a mean of 65 years (SD: 11.5 years) and 1007 females and 893 males) with reference labels generated using a multiatlas segmentation method with human supervision. The final model was validated using 71391 scans from 14 studies. Segmentation quality was assessed using Dice similarity and Pearson correlation coefficients with reference segmentations. Downstream predictive performance for brain age and Alzheimer's disease was evaluated by fitting machine learning models. Statistical significance was assessed using Mann-Whittney U and McNemar's tests. Results The DLMUSE model achieved high correlation (r = 0.93-0.95) and agreement (median Dice scores = 0.84-0.89) with reference segmentations across the testing dataset. Prediction of brain age using DLMUSE features achieved a mean absolute error of 5.08 years, similar to that of the reference method (5.15 years, <i>P</i> = .56). Classification of Alzheimer's disease using DLMUSE features achieved an accuracy of 89% and F1-score of 0.80, which were comparable to values achieved by the reference method (89% and 0.79, respectively). DLMUSE segmentation speed was over 10000 times faster than that of the reference method (3.5 seconds vs 14 hours). Conclusion DLMUSE enabled rapid brain MRI segmentation, with performance comparable to that of state-of-theart methods across diverse datasets. The resulting open-source tools and user-friendly web interface can facilitate large-scale neuroimaging research and wide utilization of advanced segmentation methods. ©RSNA, 2025.

Silva-Sousa AC, Dos Santos Cardoso G, Branco AC, Küchler EC, Baratto-Filho F, Candemil AP, Sousa-Neto MD, de Araujo CM

PubMed · Sep 17 2025
The aim of this study was to assess measurements of the maxillary canines using Cone Beam Computed Tomography (CBCT) and develop a machine learning model for sex estimation. CBCT scans from 610 patients were screened. The maxillary canines were examined to measure total tooth length, average enamel thickness, and mesiodistal width. Various supervised machine learning algorithms were employed to construct predictive models, including Decision Tree, Gradient Boosting Classifier, K-Nearest Neighbors (KNN), Logistic Regression, Multi-Layer Perceptron (MLP), Random Forest Classifier, Support Vector Machine (SVM), XGBoost, LightGBM, and CatBoost. Validation of each model was performed using a 10-fold cross-validation approach. Metrics such as area under the curve (AUC), accuracy, recall, precision, and F1 Score were computed, with ROC curves generated for visualization. The total length of the tooth proved to be the variable with the highest predictive power. The algorithms that demonstrated superior performance in terms of AUC were LightGBM and Logistic Regression, achieving AUC values of 0.77 [CI95% = 0.65-0.89] and 0.75 [CI95% = 0.62-0.86] for the test data, and 0.74 [CI95% = 0.70-0.80] and 0.75 [CI95% = 0.70-0.79] in cross-validation, respectively. Both models also showed high precision values. The use of maxillary canine measurements, combined with supervised machine learning techniques, has proven to be viable for sex estimation. The machine learning approach combined with CBCT measurements is a low-cost option, as it relies solely on a single anatomical structure.
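
As a rough illustration of the validation scheme described above, the sketch below runs 10-fold cross-validation of one of the listed classifiers. The feature matrix and labels are synthetic placeholders standing in for the three canine measurements, not the study's CBCT data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: three measurements per patient (total tooth length,
# enamel thickness, mesiodistal width) and a binary sex label.
rng = np.random.default_rng(0)
X = rng.normal(size=(610, 3))
y = rng.integers(0, 2, size=610)

model = make_pipeline(StandardScaler(), LogisticRegression())
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
auc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"10-fold AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```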

Kaniz Fatema, Emad A. Mohammed, Sukhjit Singh Sehra

arXiv preprint · Sep 17 2025
Effective and interpretable classification of medical images is a challenge in computer-aided diagnosis, especially in resource-limited clinical settings. This study introduces spline-based Kolmogorov-Arnold Networks (KANs) for accurate medical image classification with limited, diverse datasets. The models include SBTAYLOR-KAN, integrating B-splines with Taylor series; SBRBF-KAN, combining B-splines with Radial Basis Functions; and SBWAVELET-KAN, embedding B-splines in Morlet wavelet transforms. These approaches leverage spline-based function approximation to capture both local and global nonlinearities. The models were evaluated on brain MRI, chest X-rays, tuberculosis X-rays, and skin lesion images without preprocessing, demonstrating the ability to learn directly from raw data. Extensive experiments, including cross-dataset validation and data reduction analysis, showed strong generalization and stability. SBTAYLOR-KAN achieved up to 98.93% accuracy, with a balanced F1-score, maintaining over 86% accuracy using only 30% of the training data across three datasets. Despite class imbalance in the skin cancer dataset, experiments on both imbalanced and balanced versions showed SBTAYLOR-KAN outperforming other models, achieving 68.22% accuracy. Unlike traditional CNNs, which require millions of parameters (e.g., ResNet50 with 24.18M parameters), SBTAYLOR-KAN achieves comparable performance with just 2,872 trainable parameters, making it more suitable for constrained medical environments. Gradient-weighted Class Activation Mapping (Grad-CAM) was used for interpretability, highlighting relevant regions in medical images. This framework provides a lightweight, interpretable, and generalizable solution for medical image classification, addressing the challenges of limited datasets and data-scarce scenarios in clinical AI applications.
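
The exact SBTAYLOR-KAN architecture is not given in the abstract, but a KAN-style layer built from a truncated Taylor (polynomial) basis with learnable per-edge coefficients might look roughly like this hypothetical PyTorch sketch:

```python
import torch
import torch.nn as nn

class TaylorBasisLayer(nn.Module):
    """Illustrative KAN-style layer: each input feature is expanded in a
    truncated polynomial (Taylor-style) basis with learnable coefficients,
    then summed into each output unit. A hypothetical simplification of the
    idea, not the paper's implementation."""
    def __init__(self, in_dim, out_dim, degree=3):
        super().__init__()
        self.degree = degree
        # One coefficient per (output unit, input feature, basis term).
        self.coeff = nn.Parameter(torch.randn(out_dim, in_dim, degree + 1) * 0.1)

    def forward(self, x):  # x: (batch, in_dim)
        # Powers x^0 .. x^degree, shape (batch, in_dim, degree + 1).
        powers = torch.stack([x ** k for k in range(self.degree + 1)], dim=-1)
        # Sum learnable univariate functions over inputs -> (batch, out_dim).
        return torch.einsum("bip,oip->bo", powers, self.coeff)

layer = TaylorBasisLayer(in_dim=8, out_dim=4)
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```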

Chu Chen, Ander Biguri, Jean-Michel Morel, Raymond H. Chan, Carola-Bibiane Schönlieb, Jizhou Li

arXiv preprint · Sep 17 2025
X-ray Computed Laminography (CL) is essential for non-destructive inspection of plate-like structures in applications such as microchips and composite battery materials, where traditional computed tomography (CT) struggles due to geometric constraints. However, reconstructing high-quality volumes from laminographic projections remains challenging, particularly under highly sparse-view acquisition conditions. In this paper, we propose a reconstruction algorithm, namely LamiGauss, that combines Gaussian Splatting radiative rasterization with a dedicated detector-to-world transformation model incorporating the laminographic tilt angle. LamiGauss leverages an initialization strategy that explicitly filters out common laminographic artifacts from the preliminary reconstruction, preventing redundant Gaussians from being allocated to false structures and thereby concentrating model capacity on representing the genuine object. Our approach effectively optimizes directly from sparse projections, enabling accurate and efficient reconstruction with limited data. Extensive experiments on both synthetic and real datasets demonstrate the effectiveness and superiority of the proposed method over existing techniques. LamiGauss uses only 3% of full views to achieve superior performance over the iterative method optimized on a full dataset.
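
The detector-to-world transformation with a laminographic tilt angle can be pictured, in heavily simplified form, as a rotation of detector-plane coordinates about one axis. The following is a geometric illustration only, under assumed conventions, not LamiGauss code:

```python
import numpy as np

def detector_to_world(u, v, tilt_deg):
    """Illustrative detector-to-world mapping for a laminographic setup:
    in-plane detector coordinates (u, v) rotated about the x-axis by the
    laminographic tilt angle. A simplified geometric sketch."""
    t = np.deg2rad(tilt_deg)
    R = np.array([[1.0, 0.0,        0.0       ],
                  [0.0, np.cos(t), -np.sin(t)],
                  [0.0, np.sin(t),  np.cos(t)]])
    return R @ np.array([u, v, 0.0])

print(detector_to_world(1.0, 2.0, tilt_deg=30))
```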

Puru Vaish, Felix Meister, Tobias Heimann, Christoph Brune, Jelmer M. Wolterink

arXiv preprint · Sep 17 2025
Many recent approaches in representation learning implicitly assume that uncorrelated views of a data point are sufficient to learn meaningful representations for various downstream tasks. In this work, we challenge this assumption and demonstrate that meaningful structure in the latent space does not emerge naturally. Instead, it must be explicitly induced. We propose a method that aligns representations from different views of the data to capture complementary information without inducing false positives. Our experiments show that our proposed self-supervised learning method, Consistent View Alignment, improves performance for downstream tasks, highlighting the critical role of structured view alignment in learning effective representations. Our method achieved first and second place in the MICCAI 2025 SSL3D challenge when using a Primus vision transformer and ResEnc convolutional neural network, respectively. The code and pretrained model weights are released at https://github.com/Tenbatsu24/LatentCampus.
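
A generic sketch of the core idea, aligning representations of two views of the same sample via cosine similarity, is shown below; it illustrates view alignment in general, not the paper's Consistent View Alignment objective:

```python
import torch
import torch.nn.functional as F

def view_alignment_loss(z1, z2):
    """Illustrative view-alignment objective: maximize cosine similarity
    between representations of two views of the same sample."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    return 1.0 - (z1 * z2).sum(dim=-1).mean()

# Two batches of view embeddings (batch of 16, 128-dim) as stand-ins.
loss = view_alignment_loss(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```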

Yue He, Min Liu, Qinghao Liu, Jiazheng Wang, Yaonan Wang, Hang Zhang, Xiang Chen

arXiv preprint · Sep 17 2025
Image registration is a fundamental task in medical image analysis. Deformations are often closely related to the morphological characteristics of tissues, making accurate feature extraction crucial. Recent weakly supervised methods improve registration by incorporating anatomical priors such as segmentation masks or landmarks, either as inputs or in the loss function. However, such weak labels are often not readily available, limiting their practical use. Motivated by the strong representation learning ability of visual foundation models, this paper introduces SAMIR, an efficient medical image registration framework that utilizes the Segment Anything Model (SAM) to enhance feature extraction. SAM is pretrained on large-scale natural image datasets and can learn robust, general-purpose visual representations. Rather than using raw input images, we design a task-specific adaptation pipeline using SAM's image encoder to extract structure-aware feature embeddings, enabling more accurate modeling of anatomical consistency and deformation patterns. We further design a lightweight 3D head to refine features within the embedding space, adapting to local deformations in medical images. Additionally, we introduce a Hierarchical Feature Consistency Loss to guide coarse-to-fine feature matching and improve anatomical alignment. Extensive experiments demonstrate that SAMIR significantly outperforms state-of-the-art methods on benchmark datasets for both intra-subject cardiac image registration and inter-subject abdomen CT image registration, achieving performance improvements of 2.68% on ACDC and 6.44% on the abdomen dataset. The source code will be publicly available on GitHub following the acceptance of this paper.
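
The Hierarchical Feature Consistency Loss is not specified in detail in the abstract, but a coarse-to-fine feature-matching loss of this general shape might look like the following sketch; the scale weights and the MSE choice are assumptions, not SAMIR's exact loss:

```python
import torch
import torch.nn.functional as F

def hierarchical_feature_consistency(feats_warped, feats_fixed,
                                     weights=(0.25, 0.5, 1.0)):
    """Illustrative coarse-to-fine consistency loss: MSE between warped
    moving-image features and fixed-image features at several scales,
    with larger weight on finer levels."""
    return sum(w * F.mse_loss(fw, ff)
               for w, fw, ff in zip(weights, feats_warped, feats_fixed))

# Toy 3D feature pyramids at three resolutions (batch 1, 32 channels).
f_warped = [torch.randn(1, 32, s, s, s) for s in (8, 16, 32)]
f_fixed = [torch.randn(1, 32, s, s, s) for s in (8, 16, 32)]
print(hierarchical_feature_consistency(f_warped, f_fixed).item())
```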

Hammonds SK, Eftestøl T, Kurz KD, Fernandez-Quilez A

PubMed · Sep 17 2025
Alzheimer's disease (AD) is a neurodegenerative condition and the most common form of dementia. Recent developments in AD treatment call for robust diagnostic tools to facilitate medical decision-making. Despite progress on early diagnostic tests, there remains uncertainty about their clinical use. Structural magnetic resonance imaging (MRI), as a readily available imaging tool in the current AD diagnostic pathway, in combination with artificial intelligence, offers opportunities for added value beyond symptomatic evaluation. However, MRI studies in AD tend to suffer from small datasets and consequently limited generalizability. Although ensemble models take advantage of the strengths of several models to improve performance and generalizability, little is known about how different ensemble models compare in performance or about the relationship between detection performance and model calibration. The latter is especially relevant for clinical translatability. In our study, we applied three ensemble decision strategies with three different deep learning architectures for multi-class AD detection with structural MRI. For two of the three architectures, the weighted average was the best decision strategy in terms of balanced accuracy and calibration error. In contrast to the base models, the results of the ensemble models showed that the best detection performance corresponded to the lowest calibration error, independent of the architecture. For each architecture, the best ensemble model reduced the estimated calibration error compared to the base model average from (1) 0.174±0.01 to 0.164±0.04, (2) 0.182±0.02 to 0.141±0.04, and (3) 0.269±0.08 to 0.240±0.04 and increased the balanced accuracy from (1) 0.527±0.05 to 0.608±0.06, (2) 0.417±0.03 to 0.456±0.04, and (3) 0.348±0.02 to 0.371±0.03.
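
For readers unfamiliar with calibration error, the sketch below computes a standard expected calibration error (ECE) by binning predictions by confidence; the binning scheme is a common convention and an assumption here, not necessarily the estimator used in the study:

```python
import numpy as np

def expected_calibration_error(probs, labels, n_bins=10):
    """Standard ECE sketch: bin predictions by confidence and average the
    |accuracy - confidence| gap, weighted by bin size."""
    conf = probs.max(axis=1)
    pred = probs.argmax(axis=1)
    correct = (pred == labels).astype(float)
    ece = 0.0
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece

# Toy 3-class predictions for 500 samples.
probs = np.random.dirichlet(np.ones(3), size=500)
labels = np.random.randint(0, 3, size=500)
print(expected_calibration_error(probs, labels))
```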

Wu Z, Liu D, Ouyang S, Hu J, Ding J, Guo Q, Gao J, Luo J, Ren K

PubMed · Sep 17 2025
We developed a deep learning radiomics nomogram (DLRN) using CT scans to improve clinical decision-making and risk stratification for early recurrence of hepatocellular carcinoma (HCC) after liver transplantation, an outcome that typically carries a poor prognosis. In this two-center study, 245 HCC patients who had contrast-enhanced CT before liver transplantation were split into a training set (n = 184) and a validation set (n = 61). We extracted radiomics and deep learning features from tumor and peritumor areas on preoperative CT images. The DLRN was created by combining these features with significant clinical variables using multivariate logistic regression. Its performance was validated against four traditional risk criteria to assess its additional value. The DLRN model showed strong predictive accuracy for early HCC recurrence post-transplant, with AUCs of 0.884 and 0.829 in the training and validation groups. High DLRN scores were associated with a 16.370-fold increase in recurrence risk (95% CI: 7.100-31.690; p < 0.001). Combining DLRN with Metro-Ticket 2.0 criteria yielded the best prediction (AUC: training/validation: 0.936/0.863). The CT-based DLRN offers a non-invasive method for predicting early recurrence following liver transplantation in patients with HCC. Furthermore, it provides substantial additional predictive value beyond traditional prognostic scoring systems. AI-driven predictive models utilizing preoperative CT imaging enable accurate identification of early HCC recurrence risk following liver transplantation, facilitating risk-stratified surveillance protocols and optimized post-transplant management. A CT-based DLRN for predicting early HCC recurrence post-transplant was developed. The DLRN predicted recurrence with high accuracy (AUC: 0.829), and high scores conferred a 16.370-fold increase in recurrence risk. Combining DLRN with Metro-Ticket 2.0 criteria achieved optimal prediction (AUC: 0.863).
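
A nomogram of this kind typically reduces to a multivariate logistic regression over feature signatures and clinical variables. The sketch below illustrates that pattern on synthetic stand-in data; the feature columns, split, and data are placeholders, not the study's:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Stand-in data: two continuous feature signatures (e.g., radiomics and
# deep learning scores) plus one binary clinical variable, with a binary
# early-recurrence label. Synthetic, for illustration only.
rng = np.random.default_rng(1)
X = np.hstack([rng.normal(size=(245, 2)),
               rng.integers(0, 2, size=(245, 1))])
y = rng.integers(0, 2, size=245)

dlrn = make_pipeline(StandardScaler(), LogisticRegression())
dlrn.fit(X[:184], y[:184])                 # training split, as in the paper
risk = dlrn.predict_proba(X[184:])[:, 1]   # validation risk scores
print(risk[:5])
```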

Huang KC, Lin CE, Lin DS, Lin TT, Wu CK, Jeng GS, Lin LY, Lin LC

PubMed · Sep 17 2025
The adoption of left ventricular global longitudinal strain (LVGLS) is still restricted by variability across vendors and observers, despite advancements from tissue Doppler to speckle tracking imaging, machine learning, and, more recently, convolutional neural network (CNN)-based segmentation strain analysis. While CNNs have enabled fully automated strain measurement, they are inherently constrained by restricted receptive fields and a lack of temporal consistency. Transformer-based networks have emerged as a powerful alternative in medical imaging, offering enhanced global attention. Among these, the Video Swin Transformer (V-SwinT) architecture, with its 3D-shifted windows and locality inductive bias, is particularly well suited for ultrasound imaging, providing temporal consistency while optimizing computational efficiency. In this study, we propose the DTHR-SegStrain model based on a V-SwinT backbone. This model incorporates contour regression and utilizes FCN-style multiscale feature fusion. As a result, it can generate accurate and temporally consistent left ventricle (LV) contours, allowing for direct calculation of myocardial strain without the need for conversion from segmentation to contours or any additional postprocessing. Compared to EchoNet-dynamic and Unity-GLS, DTHR-SegStrain showed greater efficiency, reliability, and validity in LVGLS measurements. Furthermore, the hybridization experiments assessed the interaction between segmentation models and strain algorithms, reinforcing that consistent segmentation contours over time can simplify strain calculations and decrease measurement variability. These findings emphasize the potential of V-SwinT-based frameworks to enhance the standardization and clinical applicability of LVGLS assessments.
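
Global longitudinal strain itself is a simple quantity once contours are available: the percentage change of myocardial contour length relative to end-diastole. The sketch below illustrates that calculation on toy contours; it is not the DTHR-SegStrain pipeline:

```python
import numpy as np

def gls_percent(contours):
    """Illustrative GLS: percentage change of contour length relative to the
    end-diastolic frame. `contours` is a sequence of (N, 2) point arrays over
    the cardiac cycle, with frame 0 taken as end-diastole."""
    def length(c):
        return np.linalg.norm(np.diff(c, axis=0), axis=1).sum()
    l0 = length(contours[0])
    strains = [(length(c) - l0) / l0 * 100 for c in contours]
    return min(strains)  # peak (most negative) strain over the cycle

# Toy cycle: 10 frames of 50-point contours.
cycle = [np.cumsum(np.random.rand(50, 2), axis=0) for _ in range(10)]
print(gls_percent(cycle))
```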

Olbrich L, Larsson L, Dodd PJ, Palmer M, Nguyen MHTN, d'Elbée M, Hesseling AC, Heinrich N, Zar HJ, Ntinginya NE, Khosa C, Nliwasa M, Verghese V, Bonnet M, Wobudeya E, Nduna B, Moh R, Mwanga J, Mustapha A, Breton G, Taguebue JV, Borand L, Marcy O, Chabala C, Seddon J, van der Zalm MM

PubMed · Sep 17 2025
In 2022, the WHO conditionally recommended the use of treatment decision algorithms (TDAs) for treatment decision-making in children <10 years with presumptive tuberculosis (TB), aiming to decrease the substantial case detection gap and improve treatment access in high TB-incidence settings. WHO also called for external validation of these TDAs. Within the Decide-TB project (PACT ID: PACTR202407866544155, 23 July 2024), we aim to generate an individual-participant dataset (IPD) from prospective TB diagnostic accuracy cohorts (RaPaed-TB, UMOYA and two cohorts from TB-Speed). Using the IPD, we aim to: (1) assess the diagnostic accuracy of published TDAs using a set of consensus case definitions produced by the National Institutes of Health as reference standard (confirmed and unconfirmed vs unlikely TB); (2) evaluate the added value of novel tools (including biomarkers and artificial intelligence-interpreted radiology) in the existing TDAs; (3) generate an artificial population, modelling the target population of children eligible for WHO-endorsed TDAs presenting at primary and secondary healthcare levels, and assess the diagnostic accuracy of published TDAs and (4) identify clinical predictors of radiological disease severity in children from the study population of children with presumptive TB. This study will externally validate the first data-driven WHO TDAs in a large, well-characterised and diverse paediatric IPD derived from four large paediatric cohorts of children investigated for TB. The study has received ethical clearance for sharing secondary deidentified data from the ethics committees of the parent studies (RaPaed-TB, UMOYA and TB-Speed) and, as the aims of this study were part of the parent studies' protocols, a separate approval was not necessary. Study findings will be published in peer-reviewed journals and disseminated at local, regional and international scientific meetings and conferences. This database will serve as a catalyst for assessing the inclusion of novel tools and for generating an artificial population to simulate the impact of novel diagnostic pathways for TB in children at lower levels of healthcare. TDAs have the potential to close the diagnostic gap in childhood TB. Further finetuning of the currently available algorithms will facilitate this and improve access to care.
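
Aim (1), diagnostic accuracy against the NIH consensus reference standard, boils down to sensitivity and specificity of the TDA outcome. Below is a minimal sketch on synthetic data, not the Decide-TB analysis code:

```python
import numpy as np

def sensitivity_specificity(tda_positive, reference_tb):
    """Illustrative diagnostic accuracy: TDA outcome versus a reference
    standard (confirmed/unconfirmed vs unlikely TB), both boolean arrays."""
    tp = np.sum(tda_positive & reference_tb)
    fn = np.sum(~tda_positive & reference_tb)
    tn = np.sum(~tda_positive & ~reference_tb)
    fp = np.sum(tda_positive & ~reference_tb)
    return tp / (tp + fn), tn / (tn + fp)

# Toy cohort of 300 children with simulated TDA and reference outcomes.
tda = np.random.rand(300) > 0.4
ref = np.random.rand(300) > 0.6
print(sensitivity_specificity(tda, ref))
```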