
PMFF-Net: A deep learning-based image classification model for UIP, NSIP, and OP.

Xu MW, Zhang ZH, Wang X, Li CT, Yang HY, Liao ZH, Zhang JQ

pubmed paper · Jun 19 2025
High-resolution computed tomography (HRCT) is helpful for diagnosing interstitial lung diseases (ILD), but interpretation depends largely on the experience of physicians. Our study aims to develop a deep-learning-based classification model to differentiate the three common types of ILD, so as to provide a reference to help physicians make the diagnosis and improve the accuracy of ILD diagnosis. Patients were selected from four tertiary Grade A hospitals in Kunming based on inclusion and exclusion criteria. HRCT scans of 130 patients were included, with imaging manifestations of usual interstitial pneumonia (UIP), non-specific interstitial pneumonia (NSIP), and organizing pneumonia (OP). Additionally, 50 chest HRCT cases without imaging abnormalities from the same period were selected to construct the dataset. The Parallel Multi-scale Feature Fusion Network (PMFF-Net) deep learning model was trained, validated, and tested on this dataset, and Python was used to generate performance data and charts. The model's accuracy, precision, recall, and F1-score were assessed, and its diagnostic efficacy was compared against that of physicians from hospitals of various levels, with differing seniority, and from various departments. The PMFF-Net deep learning model is capable of classifying UIP, NSIP, and OP imaging types, as well as normal imaging. In a mere 105 s, it made the diagnosis for 18 HRCT images with a diagnostic accuracy of 92.84%, precision of 91.88%, recall of 91.95%, and an F1-score of 0.9171. The diagnostic accuracy of senior radiologists (83.33%) and pulmonologists (77.77%) from tertiary hospitals was higher than that of internists from secondary hospitals (33.33%). Meanwhile, the diagnostic accuracy of middle-aged radiologists (61.11%) and pulmonologists (66.66%) at tertiary hospitals was higher than that of junior radiologists (38.88%) and pulmonologists (44.44%), whereas junior and middle-aged internists at secondary hospitals were unable to complete the tests. This study found that the PMFF-Net model can effectively classify UIP, NSIP, and OP imaging types as well as normal imaging, which can help doctors at different hospital levels and in different departments make clinical decisions quickly and effectively.
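
The abstract does not specify PMFF-Net's architecture, so the following is only a minimal sketch of the parallel multi-scale feature fusion idea its name suggests: parallel convolutional branches with different receptive fields fused by concatenation, followed by a four-class head (UIP, NSIP, OP, normal). Branch widths, kernel sizes, and input shape are assumptions.

```python
# Hypothetical sketch of parallel multi-scale feature fusion; not the published model.
import torch
import torch.nn as nn

class MultiScaleFusionBlock(nn.Module):
    def __init__(self, in_ch: int, branch_ch: int = 32):
        super().__init__()
        # Parallel branches with different receptive fields (assumed kernel sizes).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(in_ch, branch_ch, k, padding=k // 2),
                          nn.BatchNorm2d(branch_ch), nn.ReLU(inplace=True))
            for k in (1, 3, 5)
        ])

    def forward(self, x):
        # Fuse by channel-wise concatenation of the parallel branches.
        return torch.cat([b(x) for b in self.branches], dim=1)

class PMFFNetSketch(nn.Module):
    def __init__(self, num_classes: int = 4):  # UIP, NSIP, OP, normal
        super().__init__()
        self.fuse = MultiScaleFusionBlock(1)    # single-channel HRCT slice assumed
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(3 * 32, num_classes)

    def forward(self, x):
        f = self.pool(self.fuse(x)).flatten(1)
        return self.head(f)

logits = PMFFNetSketch()(torch.randn(2, 1, 224, 224))  # -> shape (2, 4)
```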

Towards Classifying Histopathological Microscope Images as Time Series Data

Sungrae Hong, Hyeongmin Park, Youngsin Ko, Sol Lee, Bryan Wong, Mun Yong Yi

arxiv preprint · Jun 19 2025
As the frontline data for cancer diagnosis, microscopic pathology images are fundamental for providing patients with rapid and accurate treatment. However, despite their practical value, the deep learning community has largely overlooked their usage. This paper proposes a novel approach to classifying microscopy images as time series data, addressing the unique challenges posed by their manual acquisition and weakly labeled nature. The proposed method fits image sequences of varying lengths to a fixed-length target by leveraging Dynamic Time Warping (DTW). Attention-based pooling is employed to simultaneously predict the class of the case. We demonstrate the effectiveness of our approach by comparing performance with various baselines and showcasing the benefits of various inference strategies in achieving stable and reliable results. Ablation studies further validate the contribution of each component. Our approach contributes to medical image analysis by not only embracing microscopic images but also lifting them to a trustworthy level of performance.
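
A minimal sketch of the two mechanisms named above: aligning a variable-length sequence of per-image feature vectors to a fixed length with classic DTW, then attention-based pooling into a single case representation. The uniform reference sequence, the averaging over the warping path, and the attention parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dtw_path(a: np.ndarray, b: np.ndarray):
    """Return the DTW warping path between sequences a (n, d) and b (m, d)."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1) to recover the alignment.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warp_to_fixed_length(seq: np.ndarray, target_len: int) -> np.ndarray:
    # Assumed reference: a uniform subsample of the sequence itself.
    ref = seq[np.linspace(0, len(seq) - 1, target_len).astype(int)]
    out = np.zeros_like(ref)
    counts = np.zeros(target_len)
    for i, j in dtw_path(seq, ref):       # average frames mapped to each slot
        out[j] += seq[i]
        counts[j] += 1
    return out / np.maximum(counts, 1)[:, None]

feats = np.random.rand(37, 128)           # 37 microscope images, 128-d features
fixed = warp_to_fixed_length(feats, 16)   # (16, 128) fixed-length sequence
attn = np.exp(fixed @ np.random.rand(128))  # toy attention scores
attn /= attn.sum()
case_repr = attn @ fixed                  # attention-pooled case representation
```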

EchoFM: Foundation Model for Generalizable Echocardiogram Analysis.

Kim S, Jin P, Song S, Chen C, Li Y, Ren H, Li X, Liu T, Li Q

pubmed paper · Jun 18 2025
Echocardiography is the first-line noninvasive cardiac imaging modality, providing rich spatio-temporal information on cardiac anatomy and physiology. Recently, foundation models trained on extensive and diverse datasets have shown strong performance in various downstream tasks. However, translating foundation models into the medical imaging domain remains challenging due to domain differences between medical and natural images and the lack of diverse patient and disease datasets. In this paper, we introduce EchoFM, a general-purpose vision foundation model for echocardiography trained on a large-scale dataset of over 20 million echocardiographic images from 6,500 patients. To enable effective learning of rich spatio-temporal representations from periodic videos, we propose a novel self-supervised learning framework based on a masked autoencoder with a spatio-temporal consistent masking strategy and periodic-driven contrastive learning. The learned cardiac representations can be readily adapted and fine-tuned for a wide range of downstream tasks, serving as a strong and flexible backbone model. We validate EchoFM through experiments across key downstream tasks in the clinical echocardiography workflow, leveraging public and multi-center internal datasets. EchoFM consistently outperforms SOTA methods, demonstrating superior generalization capabilities and flexibility. The code and checkpoints are available at: https://github.com/SekeunKim/EchoFM.git.
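
A minimal sketch of one plausible reading of the "spatio-temporal consistent masking strategy": tube masking, in which the same spatial patches are masked in every frame so the autoencoder must use temporal context for reconstruction. Patch count, frame count, and the 0.75 mask ratio are assumptions, not EchoFM's published configuration.

```python
import torch

def tube_mask(num_patches_per_frame: int, num_frames: int, mask_ratio: float = 0.75):
    n_mask = int(num_patches_per_frame * mask_ratio)
    # Choose one spatial mask and repeat it across all frames ("tube" masking).
    spatial = torch.randperm(num_patches_per_frame)[:n_mask]
    mask = torch.zeros(num_frames, num_patches_per_frame, dtype=torch.bool)
    mask[:, spatial] = True
    return mask  # True = masked; the decoder reconstructs these patches

mask = tube_mask(num_patches_per_frame=14 * 14, num_frames=16)
print(mask.shape, mask.float().mean())  # torch.Size([16, 196]), ~0.75
```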

Quality control system for patient positioning and filling in meta-information for chest X-ray examinations.

Borisov AA, Semenov SS, Kirpichev YS, Arzamasov KM, Omelyanskaya OV, Vladzymyrskyy AV, Vasilev YA

pubmed paper · Jun 18 2025
During radiography, irregularities occur that decrease the diagnostic value of the images obtained. The purpose of this work was to develop a system for automated quality assurance of patient positioning in chest radiographs, with detection of suboptimal contrast, brightness, and metadata errors. The quality assurance system was trained and tested using more than 69,000 X-rays of the chest and other anatomical areas from the Unified Radiological Information Service (URIS) and several open datasets. Our dataset included studies regardless of a patient's gender and race, the sole exclusion criterion being age below 18 years. A training dataset of radiographs labeled by expert radiologists was used to train an ensemble of modified deep convolutional neural network architectures (ResNet152V2 and VGG19) to identify various quality deficiencies. Model performance was assessed using area under the receiver operating characteristic curve (ROC-AUC), precision, recall, F1-score, and accuracy metrics. Seven neural network models were trained to classify radiographs by the following quality deficiencies: failure to capture the target anatomic region, chest rotation, suboptimal brightness, incorrect anatomical area, projection errors, and improper photometric interpretation. All metrics for each model exceed 95%, indicating high predictive value. All models were combined into a unified system for evaluating radiograph quality. The processing time per image is approximately 3 s. The system supports multiple use cases: integration into automated radiographic workstations, external quality assurance for radiology departments, acquisition quality audits for municipal health systems, and routing of studies to diagnostic AI models.
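
A hedged sketch of the ensembling step described above, averaging the sigmoid outputs of ResNet152V2 and VGG19 branches for one binary quality-deficiency label (the abstract describes seven such classifiers). Input size, head design, and the averaging rule are assumptions.

```python
import tensorflow as tf

def build_branch(backbone_cls):
    # weights="imagenet" would load pretrained weights; None avoids a download here.
    base = backbone_cls(include_top=False, weights=None,
                        input_shape=(224, 224, 3), pooling="avg")
    inp = tf.keras.Input((224, 224, 3))
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base(inp))
    return tf.keras.Model(inp, out)

branches = [build_branch(tf.keras.applications.ResNet152V2),
            build_branch(tf.keras.applications.VGG19)]
inp = tf.keras.Input((224, 224, 3))
score = tf.keras.layers.Average()([b(inp) for b in branches])  # ensemble score
ensemble = tf.keras.Model(inp, score)
ensemble.summary()
```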

Sex, stature, and age estimation from skull using computed tomography images: Current status, challenges, and future perspectives.

Du Z, Navic P, Mahakkanukrauh P

pubmed paper · Jun 18 2025
The skull has long been recognized and utilized in forensic investigations, evolving from basic to complex analyses with modern technologies. Advances in radiology and technology have enhanced the ability to analyze biological identifiers (sex, stature, and age at death) from the skull. The use of computed tomography imaging helps practitioners improve the accuracy and reliability of forensic analyses. Recently, artificial intelligence has increasingly been applied in digital forensic investigations to estimate sex, stature, and age from computed tomography images. The integration of artificial intelligence represents a significant shift toward multidisciplinary collaboration, offering the potential for more accurate and reliable identification, along with advancements in academia. However, it is not yet fully developed for routine forensic work, as it remains largely in the research and development phase. Additionally, the limitations of artificial intelligence systems, such as the lack of transparency in algorithms, accountability for errors, and the potential for discrimination, must still be carefully considered. Based on scientific publications from the past decade, this article aims to provide an overview of the application of computed tomography imaging in estimating sex, stature, and age from the skull, and to address future directions for further improvement.

Identification, characterisation and outcomes of pre-atrial fibrillation in heart failure with reduced ejection fraction.

Helbitz A, Nadarajah R, Mu L, Larvin H, Ismail H, Wahab A, Thompson P, Harrison P, Harris M, Joseph T, Plein S, Petrie M, Metra M, Wu J, Swoboda P, Gale CP

pubmed paper · Jun 18 2025
Atrial fibrillation (AF) in heart failure with reduced ejection fraction (HFrEF) has prognostic implications. Using a machine learning algorithm (FIND-AF), we aimed to explore clinical events and the cardiac magnetic resonance (CMR) characteristics of the pre-AF phenotype in HFrEF. We studied a cohort of individuals aged ≥18 years with HFrEF and without AF from the MATCH 1 and MATCH 2 studies (2018-2024), stratified by FIND-AF score. All underwent CMR, analysed with Cvi42 software for volumetric and T1/T2 measurements. The primary outcome was time to a composite of major adverse cardiovascular events (MACE) comprising heart failure hospitalisation, myocardial infarction, stroke and all-cause mortality. Secondary outcomes included the association between CMR findings and FIND-AF score. Of 385 patients [mean age 61.7 (12.6) years, 39.0% women] with a median 2.5 years of follow-up, the primary outcome occurred in 58 (30.2%) patients in the high FIND-AF risk group and 23 (11.9%) in the low FIND-AF risk group (hazard ratio 3.25, 95% CI 2.00-5.28, P < 0.001). A higher FIND-AF score was associated with higher indexed left ventricular mass (β = 4.7, 95% CI 0.5-8.9), indexed left atrial volume (β = 5.9, 95% CI 2.2-9.6), indexed left ventricular end-diastolic volume (β = 9.55, 95% CI 1.37-17.74, P = 0.022), native T1 signal (β = 18.0, 95% CI 7.0-29.1) and extracellular volume (β = 1.6, 95% CI 0.6-2.5). A pre-AF HFrEF subgroup with distinct CMR characteristics and poor prognosis may be identified, potentially guiding interventions to reduce clinical events.
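
A minimal sketch of the survival comparison reported above: time to the MACE composite by FIND-AF risk group, fit with a Cox proportional hazards model via the lifelines package. The synthetic dataframe and column names are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 385  # cohort size from the abstract; the values below are synthetic
df = pd.DataFrame({
    "high_find_af": rng.integers(0, 2, n),   # 1 = high FIND-AF risk group
    "time_years": rng.exponential(2.5, n),   # follow-up time
    "mace": rng.integers(0, 2, n),           # composite event indicator
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_years", event_col="mace")
print(cph.hazard_ratios_)  # abstract reports HR 3.25 (95% CI 2.00-5.28)
```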

NERO: Explainable Out-of-Distribution Detection with Neuron-level Relevance

Anju Chhetri, Jari Korhonen, Prashnna Gyawali, Binod Bhattarai

arxiv preprint · Jun 18 2025
Ensuring reliability is paramount in deep learning, particularly within the domain of medical imaging, where diagnostic decisions often hinge on model outputs. The capacity to separate out-of-distribution (OOD) samples has proven to be a valuable indicator of a model's reliability in research. In medical imaging, this is especially critical, as identifying OOD inputs can help flag potential anomalies that might otherwise go undetected. While many OOD detection methods rely on feature or logit space representations, recent works suggest these approaches may not fully capture OOD diversity. To address this, we propose a novel OOD scoring mechanism, called NERO, that leverages neuron-level relevance at the feature layer. Specifically, we cluster neuron-level relevance for each in-distribution (ID) class to form representative centroids and introduce a relevance distance metric to quantify a new sample's deviation from these centroids, enhancing OOD separability. Additionally, we refine performance by incorporating scaled relevance in the bias term and combining feature norms. Our framework also enables explainable OOD detection. We validate its effectiveness across multiple deep learning architectures on the gastrointestinal imaging benchmarks Kvasir and GastroVision, achieving improvements over state-of-the-art OOD detection methods.
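
A minimal numpy sketch of the scoring idea described above: form one centroid of neuron-level relevance per in-distribution class, then score a new sample by its distance to the nearest centroid (larger = more OOD-like). The relevance attribution itself and the scaled-relevance and feature-norm refinements are abstracted away, and simple class means stand in for clustering.

```python
import numpy as np

def fit_centroids(relevances: np.ndarray, labels: np.ndarray) -> np.ndarray:
    # One centroid per ID class from training-set relevance vectors.
    return np.stack([relevances[labels == c].mean(0) for c in np.unique(labels)])

def nero_style_score(r: np.ndarray, centroids: np.ndarray) -> float:
    # Distance to the nearest class centroid; larger suggests OOD.
    return float(np.min(np.linalg.norm(centroids - r, axis=1)))

R = np.random.rand(1000, 256)        # neuron-level relevance, 256 neurons (toy data)
y = np.random.randint(0, 5, 1000)    # 5 in-distribution classes
C = fit_centroids(R, y)
print(nero_style_score(np.random.rand(256), C))
```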

Brain Stroke Classification Using Wavelet Transform and MLP Neural Networks on DWI MRI Images

Mana Mohammadi, Amirhesam Jafari Rad, Ashkan Behrouzi

arxiv preprint · Jun 18 2025
This paper presents a lightweight framework for classifying brain stroke types from Diffusion-Weighted Imaging (DWI) MRI scans, employing a Multi-Layer Perceptron (MLP) neural network with Wavelet Transform for feature extraction. Accurate and timely stroke detection is critical for effective treatment and improved patient outcomes in neuroimaging. While Convolutional Neural Networks (CNNs) are widely used for medical image analysis, their computational complexity often hinders deployment in resource-constrained clinical settings. In contrast, our approach combines Wavelet Transform with a compact MLP to achieve efficient and accurate stroke classification. Using the "Brain Stroke MRI Images" dataset, our method yields classification accuracies of 82.0% with the "db4" wavelet (level 3 decomposition) and 86.0% with the "Haar" wavelet (level 2 decomposition). This analysis highlights a balance between diagnostic accuracy and computational efficiency, offering a practical solution for automated stroke diagnosis. Future research will focus on enhancing model robustness and integrating additional MRI modalities for comprehensive stroke assessment.
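
A hedged sketch of the pipeline described above: a 2-D wavelet decomposition of a DWI slice (here "haar" at level 2, the abstract's 86.0% configuration) followed by a compact MLP. Using only the approximation band as features, the toy data, and the MLP size are assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wavelet_features(img: np.ndarray, wavelet: str = "haar", level: int = 2):
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    return coeffs[0].ravel()  # low-frequency approximation band as the feature vector

# Synthetic stand-ins for DWI slices and binary stroke-type labels.
X = np.stack([wavelet_features(np.random.rand(128, 128)) for _ in range(64)])
y = np.random.randint(0, 2, 64)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300).fit(X, y)
print(clf.score(X, y))
```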

Classification of Multi-Parametric Body MRI Series Using Deep Learning

Boah Kim, Tejas Sudharshan Mathai, Kimberly Helm, Peter A. Pinto, Ronald M. Summers

arxiv preprint · Jun 18 2025
Multi-parametric magnetic resonance imaging (mpMRI) exams comprise various series types acquired with different imaging protocols. The DICOM headers of these series often contain incorrect information due to the sheer diversity of protocols and occasional technologist errors. To address this, we present a deep learning-based classification model that classifies 8 different body mpMRI series types so that radiologists can read exams efficiently. Using mpMRI data from various institutions, multiple deep learning-based classifiers (ResNet, EfficientNet, and DenseNet) are trained to classify the 8 series types, and their performance is compared. The best-performing classifier is then identified, and its classification capability under different training data quantities is studied. The model is also evaluated on out-of-training-distribution datasets. Moreover, the model is trained using mpMRI exams obtained from different scanners under two training strategies, and its performance is tested. Experimental results show that the DenseNet-121 model achieves the highest F1-score (0.966) and accuracy (0.972) among the classification models (p-value < 0.05). The model achieves greater than 0.95 accuracy when trained with over 729 studies, and its performance improves as the quantity of training data grows. On the external DLDS and CPTAC-UCEC datasets, the model yields accuracies of 0.872 and 0.810, respectively. These results indicate that on both internal and external datasets, the DenseNet-121 model attains high accuracy for the task of classifying 8 body MRI series types.
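
A minimal sketch of adapting DenseNet-121, the best-performing model reported above, to the 8-way series-type classification; the handling of MRI inputs and all training details are assumptions.

```python
import torch
import torchvision

# Replace the ImageNet classifier head with an 8-way head for series types.
model = torchvision.models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, 8)

logits = model(torch.randn(2, 3, 224, 224))  # -> shape (2, 8)
```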

Automated Multi-grade Brain Tumor Classification Using Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network in MRI Images.

Thanya T, Jeslin T

pubmed paper · Jun 18 2025
Brain tumor classification using Magnetic Resonance Imaging (MRI) images is an important and emerging field of medical imaging and artificial intelligence. With advancements in technology, particularly in deep learning and machine learning, researchers and clinicians are leveraging these tools to create complex models that can reliably detect and classify brain tumors from MRI data. However, the task poses a number of challenges, including the intricacy of tumor types and grades, intensity variations in MRI data, and tumors of varying severity. This paper proposes a Multi-Grade Hierarchical Classification Network Model (MGHCN) for the hierarchical classification of tumor grades in MRI images. The model's distinctive feature lies in its ability to categorize tumors into multiple grades, thereby capturing the hierarchical nature of tumor severity. To address variations in intensity levels across different MRI samples, an Improved Adaptive Intensity Normalization (IAIN) pre-processing step is employed. This step standardizes intensity values, effectively mitigating the impact of intensity variations and ensuring more consistent analyses. The model utilizes the Dual-Tree Complex Wavelet Transform with Enhanced Trigonometric Features (DTCWT-ETF) for efficient feature extraction. DTCWT-ETF captures both spatial and frequency characteristics, allowing the model to distinguish between different tumor types more effectively. In the classification stage, the framework introduces the Adaptive Hierarchical Optimized Horse Herd BiLSTM Fusion Network (AHOHH-BiLSTM). This multi-grade classification model is designed with a comprehensive architecture, including distinct layers that enhance the learning process and adaptively refine parameters. The purpose of this study is to improve the precision of distinguishing different grades of tumors in MRI images. To evaluate the proposed MGHCN framework, a set of evaluation metrics is used, including precision, recall, and the F1-score. The framework employs the BraTS Challenge 2021, Br35H, and BraTS Challenge 2023 datasets, a combination that ensures comprehensive training and evaluation. By utilizing these datasets along with a comprehensive set of evaluation metrics, the MGHCN framework aims to enhance brain tumor classification in MRI images and provide a more thorough and sophisticated understanding of its capabilities and performance.
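
A hedged sketch of a BiLSTM classifier over per-slice feature sequences, in the spirit of the AHOHH-BiLSTM stage; the DTCWT-ETF features, the horse-herd hyperparameter optimization, and the grade hierarchy are abstracted away, and the feature dimension and grade count are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMGradeClassifier(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64, num_grades: int = 4):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_grades)

    def forward(self, x):                    # x: (batch, slices, feat_dim)
        out, _ = self.lstm(x)                # (batch, slices, 2 * hidden)
        return self.head(out.mean(dim=1))    # average over the slice sequence

logits = BiLSTMGradeClassifier()(torch.randn(2, 20, 128))  # -> shape (2, 4)
```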