
Goetz-Fu M, Haller M, Collins T, Begusic N, Jochum F, Keeza Y, Uwineza J, Marescaux J, Weingertner AS, Sananès N, Hostettler A

PubMed · Sep 22 2025
The objective was to develop an artificial intelligence (AI)-based system, using deep neural network (DNN) technology, to automatically detect standard fetal planes during video capture, measure fetal biometry parameters and estimate fetal weight. A standard plane recognition DNN was trained to classify ultrasound images into four categories: head circumference (HC), abdominal circumference (AC), femur length (FL) standard planes, or 'other'. The recognized standard plane images were subsequently processed by three fetal biometry DNNs, which automatically measured HC, AC and FL. Fetal weight was then estimated with the Hadlock 3 formula. The training dataset consisted of 16,626 images. A prospective temporal validation was then conducted using an independent set of 281 ultrasound videos of healthy fetuses. Fetal weight and biometry measurements were compared against an expert sonographer; two less experienced sonographers served as controls. The AI system obtained a significantly lower absolute relative measurement error than the controls for fetal weight estimation (AI vs. medium-level: p = 0.032; AI vs. beginner: p < 1e-8) and likewise for AC measurements (AI vs. medium-level: p = 1.72e-04; AI vs. beginner: p < 1e-06). Average absolute relative measurement errors of the AI versus the expert were 0.96% (S.D. 0.79%) for HC, 1.56% (S.D. 1.39%) for AC, 1.77% (S.D. 1.46%) for FL and 3.10% (S.D. 2.74%) for fetal weight estimation. The AI system produced biometry measurements and fetal weight estimates similar to those of the expert sonographer. It is a promising tool to enhance non-expert sonographers' performance and reproducibility in fetal biometry measurements, and to reduce inter-operator variability.
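
For reference, the Hadlock 3 formula cited above estimates fetal weight from the three measured parameters. A minimal sketch, assuming HC, AC and FL in centimetres and weight in grams (coefficients from Hadlock et al., 1985); the example values are illustrative, not from the study:

```python
import math

def hadlock3_efw(hc_cm: float, ac_cm: float, fl_cm: float) -> float:
    """Estimated fetal weight (grams) via the Hadlock 3 formula:

    log10(EFW) = 1.326 - 0.00326*AC*FL + 0.0107*HC + 0.0438*AC + 0.158*FL
    with HC, AC, FL in centimetres.
    """
    log10_efw = (1.326
                 - 0.00326 * ac_cm * fl_cm
                 + 0.0107 * hc_cm
                 + 0.0438 * ac_cm
                 + 0.158 * fl_cm)
    return 10 ** log10_efw

# Illustrative third-trimester biometry: ~2300 g
print(round(hadlock3_efw(hc_cm=31.0, ac_cm=30.0, fl_cm=6.5)))
```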

Aksoy S, Demircioglu P, Bogrekci I

PubMed · Sep 22 2025
Background/Objectives: Multiple sclerosis (MS) is a chronic demyelinating disease where early identification of patients at risk of conversion from clinically isolated syndrome (CIS) to clinically definite MS remains a critical unmet clinical need. Existing machine learning approaches often lack interpretability, limiting clinical trust and adoption. The objective of this research was to develop a novel two-stage machine learning framework with comprehensive explainability to predict CIS-to-MS conversion while addressing demographic bias and interpretability limitations. Methods: A cohort of 177 CIS patients from the National Institute of Neurology and Neurosurgery in Mexico City was analyzed using SeruNet-MS, a two-stage framework that separates demographic baseline risk from clinical risk modification. Stage 1 applied logistic regression to demographic features, while Stage 2 incorporated 25 clinical and symptom features, including MRI lesions, cerebrospinal fluid biomarkers, electrophysiological tests, and symptom characteristics. Patient-level interpretability was achieved through SHAP (SHapley Additive exPlanations) analysis, providing transparent attribution of each factor's contribution to risk assessment. Results: The two-stage model achieved a ROC-AUC of 0.909, accuracy of 0.806, precision of 0.842, and recall of 0.800, outperforming baseline machine learning methods. Cross-validation confirmed stable performance (0.838 ± 0.095 AUC) with appropriate generalization. SHAP analysis identified periventricular lesions, oligoclonal bands, and symptom complexity as the strongest predictors, with clinical examples illustrating transparent patient-specific risk communication. Conclusions: The two-stage approach effectively mitigates demographic bias by separating non-modifiable factors from actionable clinical findings. SHAP explanations provide clinicians with clear, individualized insights into prediction drivers, enhancing trust and supporting decision making. This framework demonstrates that high predictive performance can be achieved without sacrificing interpretability, representing a significant step forward for explainable AI in MS risk stratification and real-world clinical adoption.
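
A minimal sketch of the two-stage idea described above: demographic baseline risk from logistic regression, clinical risk modification from a second model, and SHAP attributions on the clinical stage. The synthetic cohort, column names, and stacking scheme are illustrative assumptions, not the authors' exact SeruNet-MS design:

```python
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in cohort (177 CIS patients, hypothetical columns)
rng = np.random.default_rng(0)
n = 177
df = pd.DataFrame({
    "age": rng.normal(32, 8, n),
    "sex": rng.integers(0, 2, n),
    "periventricular_lesions": rng.integers(0, 2, n),
    "oligoclonal_bands": rng.integers(0, 2, n),
    "symptom_count": rng.integers(1, 6, n),
})
y = rng.integers(0, 2, n)  # converted to clinically definite MS (0/1)

# Stage 1: baseline risk from demographics only
demo = df[["age", "sex"]]
stage1 = LogisticRegression().fit(demo, y)
df["baseline_logit"] = stage1.decision_function(demo)

# Stage 2: clinical features modify the demographic baseline
clin_cols = ["periventricular_lesions", "oligoclonal_bands",
             "symptom_count", "baseline_logit"]
stage2 = GradientBoostingClassifier().fit(df[clin_cols], y)

# Patient-level attribution of the clinical stage
explainer = shap.Explainer(stage2, df[clin_cols])
shap_values = explainer(df[clin_cols])
shap.plots.waterfall(shap_values[0])  # one patient's risk drivers
```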

Ohashi Y, Shimizu T, Koyano H, Nakamura Y, Takahashi D, Yamada K, Iwasaki N

PubMed · Sep 22 2025
Ultrasound examination using the Graf method is widely applied for early detection of developmental dysplasia of the hip (DDH), but intra- and inter-operator variability remains a limitation. This study aimed to quantify operator variability in hip ultrasound assessments and to validate an AI-assisted system for automated α-angle measurement to improve reproducibility. Thirty participants of different experience levels, including trained clinicians, residents, and medical students, each performed six ultrasound scans on a standardized infant hip phantom. Examination time, iliac margin inclination, and α-angle measurements were analyzed to assess intra- and inter-operator variability. In parallel, an AI-based system was developed to automatically detect anatomical landmarks and calculate α-angles from static images and dynamic video sequences. Validation was conducted using the phantom model with a known α-angle of 70°. Clinicians achieved shorter examination times and higher reproducibility than residents and students, with manual measurements systematically underestimating the reference α-angle. Static AI produced closer estimates with greater variability, whereas dynamic AI achieved the highest accuracy (mean 69.2°) and consistency with narrower limits of agreement than manual measurements. These findings confirm substantial operator variability and demonstrate that AI-assisted dynamic ultrasound analysis can improve reproducibility and reliability in routine DDH screening.
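
The Graf α-angle is the angle between the baseline along the iliac margin and the bony-roof line, so once a detector has located the two landmark lines, the measurement reduces to a vector angle. A minimal sketch with hypothetical landmark coordinates, not the authors' detection pipeline:

```python
import numpy as np

def alpha_angle(baseline_pts, roof_pts):
    """Graf alpha angle (degrees) between the iliac baseline and the
    bony roof line, each given as two (x, y) landmark points."""
    u = np.asarray(baseline_pts[1]) - np.asarray(baseline_pts[0])
    v = np.asarray(roof_pts[1]) - np.asarray(roof_pts[0])
    cos = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Hypothetical landmarks in pixel coordinates (~69.4 degrees, close to
# the phantom's 70-degree reference)
print(alpha_angle([(100, 50), (100, 300)],    # vertical iliac baseline
                  [(100, 300), (180, 330)]))  # bony roof line
```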

Sun P, Zhang C, Yang Z, Yin FF, Liu M

PubMed · Sep 22 2025
In image-guided radiation therapy (IGRT), deformable image registration between computed tomography (CT) and cone beam computed tomography (CBCT) images remains challenging due to the computational cost of iterative algorithms and the data dependence of supervised deep learning methods. Implicit neural representation (INR) provides a promising alternative, but conventional multilayer perceptrons (MLPs) might struggle to efficiently represent complex, nonlinear deformations. This study introduces a novel INR-based registration framework that models the deformation as a continuous, time-varying velocity field, parameterized by a Kolmogorov-Arnold Network (KAN) constructed using Jacobi polynomials. To our knowledge, this is the first integration of KAN into medical image registration, establishing a new paradigm beyond standard MLP-based INR. For improved efficiency, the KAN estimates low-dimensional principal components of the velocity field, which are reconstructed via inverse principal component analysis and temporally integrated to derive the final deformation. This approach achieves a ~70% improvement in computational efficiency relative to direct velocity field modeling while ensuring smooth and topology-preserving transformations through velocity regularization. Evaluation on a publicly available pelvic CT-CBCT dataset demonstrates up to 6% improvement in registration accuracy over traditional iterative methods and ~3% over MLP-based INR baselines, indicating the potential of the proposed method as an efficient and generalizable alternative for deformable registration.
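
A minimal sketch of the deformation model described above: a network predicts low-dimensional PCA coefficients of a time-varying velocity field, which is reconstructed via the (inverse) PCA basis and integrated over time into a displacement. The KAN is replaced by a plain MLP here for brevity, the PCA basis is random for shape only, and the simple forward-Euler loop omits warping the velocity by the running deformation; all shapes are illustrative assumptions:

```python
import torch
import torch.nn as nn

N_VOX, K, STEPS = 32**3, 16, 8      # voxels, PCA components, Euler steps

# Stand-in for the Jacobi-polynomial KAN: maps time t to the K
# principal-component coefficients of the velocity field at that time.
coeff_net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, K))

# Fixed PCA basis of velocity fields (3 displacement channels per voxel);
# in the paper this comes from PCA, random here for shape only.
basis = torch.randn(K, N_VOX * 3)

def deformation():
    """Integrate the time-varying velocity v(t) over t in [0, 1]."""
    phi = torch.zeros(N_VOX * 3)               # flattened displacement field
    dt = 1.0 / STEPS
    for s in range(STEPS):
        t = torch.tensor([[s * dt]])
        v = (coeff_net(t) @ basis).squeeze(0)  # inverse-PCA reconstruction
        phi = phi + dt * v                     # forward-Euler step
    return phi.view(3, 32, 32, 32)

print(deformation().shape)                     # torch.Size([3, 32, 32, 32])
```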

Rahi, A., Shafiabadi, M. H.

medRxiv preprint · Sep 22 2025
Brain metastases represent one of the most common intracranial malignancies, yet early and accurate detection remains challenging, particularly in clinical datasets with limited availability of healthy controls. In this study, we developed a feature-based machine learning framework to classify patients with and without brain metastases using multi-modal clinical MRI scans. A dataset of 50 subjects from the UCSF Brain Metastases collection was analyzed, including pre- and post-contrast T1-weighted images and corresponding segmentation masks. We designed advanced feature extraction strategies capturing intensity, enhancement patterns, texture gradients, and histogram-based metrics, resulting in 44 quantitative descriptors per subject. To address the severe class imbalance (46 metastasis vs. 4 non-metastasis cases), we applied minority oversampling and noise-based augmentation, combined with stratified cross-validation. Among multiple classifiers, Random Forest consistently achieved the highest performance with an average accuracy of 96.7% and an area under the ROC curve (AUC) of 0.99 across five folds. The proposed approach highlights the potential of handcrafted radiomic-like features coupled with machine learning to improve metastasis detection in heterogeneous clinical MRI cohorts. These findings underscore the importance of methodological strategies for handling imbalanced data and support the integration of feature-based models as complementary tools for brain metastasis screening and research.
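
A minimal sketch of the evaluation strategy described above: minority oversampling with noise-based augmentation applied inside stratified cross-validation, followed by a Random Forest. The synthetic features, noise scale, and fold count (four here, so every test fold keeps a control case; the study reports five) are illustrative assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 44))            # 44 handcrafted features per subject
y = np.array([1] * 46 + [0] * 4)         # 46 metastasis vs. 4 non-metastasis

aucs = []
for tr, te in StratifiedKFold(4, shuffle=True, random_state=0).split(X, y):
    Xtr, ytr = X[tr], y[tr]
    # Oversample the minority class with small Gaussian noise, applied to
    # training folds only so augmented copies never leak into the test fold.
    minority = Xtr[ytr == 0]
    idx = rng.choice(len(minority), size=(ytr == 1).sum(), replace=True)
    synth = minority[idx] + rng.normal(scale=0.05, size=(len(idx), 44))
    Xtr = np.vstack([Xtr, synth])
    ytr = np.concatenate([ytr, np.zeros(len(idx), dtype=int)])
    clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(Xtr, ytr)
    aucs.append(roc_auc_score(y[te], clf.predict_proba(X[te])[:, 1]))

print(f"mean AUC: {np.mean(aucs):.2f}")
```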

Beni HM, Asaei FY

PubMed · Sep 21 2025
Breast cancer is one of the leading causes of cancer death among women, and early diagnosis increases the probability of survival. For this purpose, medical imaging methods, especially mammography, are used for screening and early diagnosis of breast abnormalities. The main goal of this study is to distinguish benign from malignant tumors based on morphology features derived from tumor outlines extracted from mammography images. Unlike previous studies, this study does not use the mammographic image itself but only the exact outline of the tumor. These outlines were extracted from a new and publicly available mammography database published in 2024. Features of the outlines were computed using well-known pre-trained Convolutional Neural Networks (CNNs), including VGG16, ResNet50, Xception65, AlexNet, DenseNet, GoogLeNet, Inception-v3, and combinations of them to improve performance. These pre-trained networks have been used in many studies in various fields. For classification, well-known Machine Learning (ML) algorithms, such as Support Vector Machine (SVM), K-Nearest Neighbor (KNN), Neural Network (NN), Naïve Bayes (NB), Decision Tree (DT), and combinations of them, were compared on accuracy, specificity, sensitivity, and precision. Data augmentation increased the dataset size about 6-8 times, and K-fold cross-validation (K = 5) was used in this study. Based on the performed simulations, combining the features from all pre-trained deep networks with the NB classifier yielded the best results: 88.13% accuracy, 92.52% specificity, 83.73% sensitivity, and 92.04% precision. Furthermore, validation on the DMID dataset using ResNet50 features with the NB classifier led to 92.03% accuracy, 95.57% specificity, 88.49% sensitivity, and 95.23% precision. This study sheds light on using AI algorithms to reduce the need for biopsies and speed up breast tumor classification using tumor outlines in mammographic images.
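
A minimal sketch of the pipeline described above: a frozen pre-trained ResNet50 turns an outline image into a 2048-d feature vector, which a Naïve Bayes classifier then labels. torchvision is assumed, and the preprocessing and toy data are illustrative, not the study's exact setup:

```python
import numpy as np
import torch
from sklearn.naive_bayes import GaussianNB
from torchvision import models, transforms

# Frozen ResNet50 backbone as a 2048-d feature extractor
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

prep = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def features(outline_batch):
    """outline_batch: float tensor (N, 3, H, W) of rendered tumor outlines."""
    return backbone(prep(outline_batch)).numpy()

# Toy stand-in data: 20 outline images, binary benign/malignant labels
X = features(torch.rand(20, 3, 256, 256))
y = np.array([0, 1] * 10)
clf = GaussianNB().fit(X, y)
print(clf.predict(X[:4]))
```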

Li S, Liu H, Li W, Gao X

PubMed · Sep 21 2025
This systematic review evaluates advanced ultrasound quantitative techniques including contrast-enhanced ultrasound, elastography, quantitative ultrasound (QUS), multiparametric ultrasound, and artificial intelligence for characterizing focal liver lesions (FLLs). It critically appraises their technical principles, parameter extraction methodologies, and clinical validation frameworks. It further integrates and comparatively analyzes their diagnostic performance across major FLL subtypes, including hepatocellular carcinoma, metastases, hemangioma, and focal nodular hyperplasia. This work provides a foundation for improving noninvasive FLL diagnosis and highlights the imperative for standardization and clinical translation of advanced QUS in hepatology.

Zihan Liang, Ziwen Pan, Ruoxuan Xiong

arXiv preprint · Sep 21 2025
Clinical notes contain rich patient information, such as diagnoses or medications, making them valuable for patient representation learning. Recent advances in large language models have further improved the ability to extract meaningful representations from clinical texts. However, clinical notes are often missing. For example, in our analysis of the MIMIC-IV dataset, 24.5% of patients have no available discharge summaries. In such cases, representations can be learned from other modalities such as structured data, chest X-rays, or radiology reports. Yet the availability of these modalities is influenced by clinical decision-making and varies across patients, resulting in modality missing-not-at-random (MMNAR) patterns. We propose a causal representation learning framework that leverages observed data and informative missingness in multimodal clinical records. It consists of: (1) an MMNAR-aware modality fusion component that integrates structured data, imaging, and text while conditioning on missingness patterns to capture patient health and clinician-driven assignment; (2) a modality reconstruction component with contrastive learning to ensure semantic sufficiency in representation learning; and (3) a multitask outcome prediction model with a rectifier that corrects for residual bias from specific modality observation patterns. Comprehensive evaluations across MIMIC-IV and eICU show consistent gains over the strongest baselines, achieving up to 13.8% AUC improvement for hospital readmission and 13.1% for ICU admission.
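
A minimal sketch of the missingness-aware fusion idea: each modality encoder contributes only when the modality is present, and the fused representation is conditioned on the binary missingness pattern itself, since under MMNAR that pattern carries information about clinician decisions. Dimensions, encoders, and modality names are illustrative assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

D = 128  # shared representation size

class MMNARFusion(nn.Module):
    def __init__(self):
        super().__init__()
        dims = {"structured": 64, "cxr": 512, "report": 768}
        self.enc = nn.ModuleDict({k: nn.Linear(d, D) for k, d in dims.items()})
        self.order = list(dims)
        # The missingness mask is an input in its own right (MMNAR signal).
        self.fuse = nn.Linear(D * len(dims) + len(dims), D)

    def forward(self, inputs, mask):
        # inputs: dict of (B, dim) tensors (zeros for missing modalities);
        # mask: (B, n_modalities) with 1 = observed, 0 = missing
        parts = []
        for i, k in enumerate(self.order):
            h = self.enc[k](inputs[k]) * mask[:, i:i + 1]  # zero if missing
            parts.append(h)
        return self.fuse(torch.cat(parts + [mask], dim=-1))

model = MMNARFusion()
x = {"structured": torch.randn(4, 64), "cxr": torch.randn(4, 512),
     "report": torch.randn(4, 768)}
mask = torch.tensor([[1, 1, 1], [1, 0, 1], [1, 1, 0], [1, 0, 0]],
                    dtype=torch.float)
print(model(x, mask).shape)  # torch.Size([4, 128])
```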

Yuzhu Li, An Sui, Fuping Wu, Xiahai Zhuang

arXiv preprint · Sep 21 2025
Uncertainty estimation has been widely studied in medical image segmentation as a tool to provide reliability, particularly in deep learning approaches. However, previous methods generally lack effective supervision in uncertainty estimation, leading to low interpretability and robustness of the predictions. In this work, we propose a self-supervised approach to guide the learning of uncertainty. Specifically, we introduce three principles relating uncertainty to image gradients around boundaries and in noisy regions. Based on these principles, two uncertainty supervision losses are designed. These losses enhance the alignment between model predictions and human interpretation. Accordingly, we introduce novel quantitative metrics for evaluating the interpretability and robustness of uncertainty. Experimental results demonstrate that compared to state-of-the-art approaches, the proposed method can achieve competitive segmentation performance and superior results in out-of-distribution (OOD) scenarios while significantly improving the interpretability and robustness of uncertainty estimation. Code is available via https://github.com/suiannaius/SURE.
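
One way to read the supervision idea: predicted uncertainty should be high where image gradients are strong (boundaries) and low in smooth regions. A minimal sketch of such a loss under that assumption; this is not the paper's exact formulation (see the linked repository for the authors' code):

```python
import torch
import torch.nn.functional as F

def gradient_uncertainty_loss(uncertainty, image):
    """Encourage uncertainty maps to track image-gradient magnitude.

    uncertainty: (B, 1, H, W) in [0, 1]; image: (B, 1, H, W).
    """
    # Finite-difference gradient magnitude, accumulated per pixel
    gx = image[..., :, 1:] - image[..., :, :-1]
    gy = image[..., 1:, :] - image[..., :-1, :]
    grad = torch.zeros_like(image)
    grad[..., :, :-1] += gx.abs()
    grad[..., :-1, :] += gy.abs()
    # Normalize per image so the target lies in [0, 1]
    grad = grad / (grad.amax(dim=(-2, -1), keepdim=True) + 1e-8)
    return F.mse_loss(uncertainty, grad)

u = torch.rand(2, 1, 64, 64, requires_grad=True)
img = torch.rand(2, 1, 64, 64)
loss = gradient_uncertainty_loss(u, img)
loss.backward()
print(float(loss))
```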

Raisa Amiruddin, Nikolay Y. Yordanov, Nazanin Maleki, Pascal Fehringer, Athanasios Gkampenis, Anastasia Janas, Kiril Krantchev, Ahmed Moawad, Fabian Umeh, Salma Abosabie, Sara Abosabie, Albara Alotaibi, Mohamed Ghonim, Mohanad Ghonim, Sedra Abou Ali Mhana, Nathan Page, Marko Jakovljevic, Yasaman Sharifi, Prisha Bhatia, Amirreza Manteghinejad, Melisa Guelen, Michael Veronesi, Virginia Hill, Tiffany So, Mark Krycia, Bojan Petrovic, Fatima Memon, Justin Cramer, Elizabeth Schrickel, Vilma Kosovic, Lorenna Vidal, Gerard Thompson, Ichiro Ikuta, Basimah Albalooshy, Ali Nabavizadeh, Nourel Hoda Tahon, Karuna Shekdar, Aashim Bhatia, Claudia Kirsch, Gennaro D'Anna, Philipp Lohmann, Amal Saleh Nour, Andriy Myronenko, Adam Goldman-Yassen, Janet R. Reid, Sanjay Aneja, Spyridon Bakas, Mariam Aboian

arXiv preprint · Sep 21 2025
High-quality reference standard image data creation by neuroradiology experts for automated clinical tools can be a powerful tool for neuroradiology & artificial intelligence education. We developed a multimodal educational approach for students and trainees during the MICCAI Brain Tumor Segmentation Lighthouse Challenge 2025, a landmark initiative to develop accurate brain tumor segmentation algorithms. Fifty-six medical students & radiology trainees volunteered to annotate brain tumor MR images for the BraTS challenges of 2023 & 2024, guided by faculty-led didactics on neuropathology MRI. Among the 56 annotators, 14 select volunteers were then paired with neuroradiology faculty for guided one-on-one annotation sessions for BraTS 2025. Lectures on neuroanatomy, pathology & AI, journal clubs & data scientist-led workshops were organized online. Annotators & audience members completed surveys on their perceived knowledge before & after annotations & lectures respectively. Fourteen coordinators, each paired with a neuroradiologist, completed the data annotation process, averaging 1322.9 ± 760.7 hours per dataset per pair and 1200 segmentations in total. On a scale of 1-10, annotation coordinators reported a significant increase in familiarity with image segmentation software pre- and post-annotation, moving from an initial average of 6 ± 2.9 to a final average of 8.9 ± 1.1, and a significant increase in familiarity with brain tumor features pre- and post-annotation, moving from an initial average of 6.2 ± 2.4 to a final average of 8.1 ± 1.2. We demonstrate an innovative offering for providing neuroradiology & AI education through an image segmentation challenge to enhance understanding of algorithm development, reinforce the concept of data reference standard, and diversify opportunities for AI-driven image analysis among future physicians.