Page 115 of 2052045 results

Detection of COVID-19, lung opacity, and viral pneumonia via X-ray using machine learning and deep learning.

Lamouadene H, El Kassaoui M, El Yadari M, El Kenz A, Benyoussef A, El Moutaouakil A, Mounkachi O

PubMed | Jun 1, 2025
The COVID-19 pandemic has significantly strained healthcare systems, highlighting the need for early diagnosis to isolate positive cases and prevent the spread. This study combines machine learning, deep learning, and transfer learning techniques to automatically diagnose COVID-19 and other pulmonary conditions from radiographic images. First, we used Convolutional Neural Networks (CNNs) and a Support Vector Machine (SVM) classifier on a dataset of 21,165 chest X-ray images. Our model achieved an accuracy of 86.18 %. This approach aids medical experts in rapidly and accurately detecting lung diseases. Next, we applied transfer learning using ResNet18 combined with SVM on a dataset comprising normal, COVID-19, lung opacity, and viral pneumonia images. This model outperformed traditional methods, with classification rates of 98 % with Stochastic Gradient Descent (SGD), 97 % with Adam, 96 % with RMSProp, and 94 % with Adagrad optimizers. We also incorporated two additional transfer learning models, EfficientNet-CNN and Xception-CNN, which achieved classification accuracies of 99.20 % and 98.80 %, respectively. However, we observed limitations in dataset diversity and representativeness, which may affect model generalization. Future work will focus on implementing advanced data augmentation techniques and collaborations with medical experts to enhance model performance. This research demonstrates the potential of cutting-edge deep learning techniques to improve diagnostic accuracy and efficiency in medical imaging applications.
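The deep-features-plus-SVM stage described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: it uses randomly generated stand-ins for CNN-extracted feature vectors (512 dimensions, matching ResNet18's penultimate layer), with an artificial class shift injected so the SVM has signal to learn.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
# Stand-ins for CNN-extracted feature vectors (512-dim, the size of
# ResNet18's penultimate layer); real features would come from X-rays.
X = rng.normal(size=(400, 512))
y = rng.integers(0, 4, size=400)  # normal, COVID-19, lung opacity, viral pneumonia
X += y[:, None] * 0.5             # synthetic class signal, for illustration only
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)  # SVM on top of the deep features
acc = accuracy_score(y_te, clf.predict(X_te))
print(f"accuracy: {acc:.2f}")
```

In practice the feature matrix `X` would be obtained by running each chest X-ray through the truncated CNN backbone and collecting the penultimate-layer activations.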

Enhancing radiomics features via a large language model for classifying benign and malignant breast tumors in mammography.

Ra S, Kim J, Na I, Ko ES, Park H

PubMed | Jun 1, 2025
Radiomics is widely used to assist in clinical decision-making, disease diagnosis, and treatment planning for various target organs, including the breast. Recent advances in large language models (LLMs) have helped enhance radiomics analysis. Herein, we sought to improve radiomics analysis by incorporating LLM-learned clinical knowledge, to classify benign and malignant tumors in breast mammography. We extracted radiomics features from the mammograms based on the region of interest and retained the features related to the target task. Using prompt engineering, we devised an input sequence that reflected the selected features and the target task. The input sequence was fed to the chosen LLM (LLaMA variant), which was fine-tuned using low-rank adaptation to enhance radiomics features. This was then evaluated on two mammogram datasets (VinDr-Mammo and INbreast) against conventional baselines. The enhanced radiomics-based method performed better than baselines using conventional radiomics features tested on two mammogram datasets, achieving accuracies of 0.671 for the VinDr-Mammo dataset and 0.839 for the INbreast dataset. Conventional radiomics models require retraining from scratch for an unseen dataset using a new set of features. In contrast, the model developed in this study effectively reused the common features between the training and unseen datasets by explicitly linking feature names with feature values, leading to extensible learning across datasets. Our method performed better than the baseline method in this retraining setting using an unseen dataset. Our method, one of the first to incorporate LLM into radiomics, has the potential to improve radiomics analysis.
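The prompt-engineering step described above, which explicitly links feature names with feature values, might look like the following sketch. The feature names and the task wording are hypothetical stand-ins (pyradiomics-style names), not the paper's actual input sequence.

```python
def build_radiomics_prompt(features, task="Classify the breast tumor as benign or malignant."):
    # Explicitly pairing feature names with values is what lets the LLM reuse
    # features shared between the training and unseen datasets.
    lines = [f"{name}: {value:.4f}" for name, value in features.items()]
    return task + "\n" + "\n".join(lines)

# Hypothetical pyradiomics-style feature names, for illustration only.
prompt = build_radiomics_prompt({
    "original_shape_Sphericity": 0.8123,
    "original_firstorder_Mean": 112.4567,
    "original_glcm_Contrast": 5.6789,
})
print(prompt)
```

Because features are named rather than positional, an unseen dataset that shares some feature names with the training set can reuse the fine-tuned model without retraining from scratch, which is the extensibility claim made above.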

Generative adversarial networks in medical image reconstruction: A systematic literature review.

Hussain J, Båth M, Ivarsson J

PubMed | Jun 1, 2025
Recent advancements in generative adversarial networks (GANs) have demonstrated substantial potential in medical image processing. Despite this progress, reconstructing images from incomplete data remains a challenge, impacting image quality. This systematic literature review explores the use of GANs in enhancing and reconstructing medical imaging data. A document survey of computing literature was conducted using the ACM Digital Library to identify relevant articles from journals and conference proceedings using keyword combinations such as "generative adversarial networks or generative adversarial network," "medical image or medical imaging," and "image reconstruction." Across the reviewed articles, 122 datasets were used in 175 instances, 89 top metrics were employed 335 times, 10 different tasks appeared with a total count of 173, 31 distinct organs featured in 119 instances, and 18 modalities were utilized in 121 instances, collectively depicting significant utilization of GANs in medical imaging. The adaptability and efficacy of GANs were showcased across diverse medical tasks, organs, and modalities, utilizing top public as well as private/synthetic datasets for disease diagnosis, including the identification of conditions like cancer in different anatomical regions. The study emphasizes GANs' increasing integration and adaptability across radiology modalities, showcasing their transformative impact on diagnostic techniques, including cross-modality tasks. The intricate interplay between network size, batch size, and loss function refinement significantly impacts GAN performance, although challenges in training persist. The study underscores GANs as dynamic tools shaping medical imaging, contributing significantly to image quality, training methodologies, and overall medical advancement.

The impact of Alzheimer's disease on cortical complexity and its underlying biological mechanisms.

Chen L, Zhou X, Qiao Y, Wang Y, Zhou Z, Jia S, Sun Y, Peng D

PubMed | Jun 1, 2025
Alzheimer's disease (AD) may affect the complexity of the cerebral cortex, and the underlying biological mechanisms responsible for cortical changes in AD remain unclear. Fifty-eight participants with AD and 67 normal controls underwent high-resolution 3 T structural brain MRI. Using surface-based morphometry (SBM), we created vertex-wise maps for group comparisons across five measures: cortical thickness, fractal dimension, gyrification index, Toro's gyrification index, and sulcal depth. Five machine learning (ML) models combining SBM parameters were established to predict AD. In addition, transcription-neuroimaging association analyses, as well as Mendelian randomization of AD and cortical thickness data, were conducted to investigate the genetic mechanisms and biological functions of AD. AD patients exhibited topological changes in cortical complexity, with increased complexity in the frontal and temporal cortex and decreased complexity in the insular cortex, alongside extensive cortical atrophy. Combining different SBM measures could aid disease diagnosis. Genes involved in cell structure support and the immune response were the strongest contributors to cortical anatomical features in AD patients. The identified genes associated with AD cortical morphology were overexpressed or underexpressed in excitatory neurons, oligodendrocytes, and astrocytes. Complexity alterations of the cerebral surface may be associated with a range of biological processes and molecular mechanisms, including immune responses. The present findings may contribute to a more comprehensive understanding of brain morphological patterns in AD patients.
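Among the SBM measures above, fractal dimension is the least self-explanatory; a minimal planar sketch of the idea, using box counting on a 2D binary mask rather than the cortical surface, is shown below. This is an illustrative simplification, not the surface-based estimator the study used.

```python
import numpy as np

def box_count_fd(mask, sizes=(2, 4, 8, 16)):
    # Box-counting estimate of fractal dimension for a 2D binary mask --
    # a simplified planar analogue of the surface-based SBM measure.
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        # Count boxes of side s that contain any part of the structure.
        counts.append(int(blocks.any(axis=(1, 3)).sum()))
    # Slope of log(count) versus log(1/size) estimates the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# A filled disk is space-filling, so its estimated dimension approaches 2;
# a convoluted (more folded) boundary would push the estimate higher than
# that of a smooth one, which is why the measure tracks cortical folding.
yy, xx = np.mgrid[:128, :128]
disk = (yy - 64) ** 2 + (xx - 64) ** 2 <= 50 ** 2
fd = box_count_fd(disk)
print(round(fd, 2))
```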

GAN-based synthetic FDG PET images from T1 brain MRI can serve to improve performance of deep unsupervised anomaly detection models.

Zotova D, Pinon N, Trombetta R, Bouet R, Jung J, Lartizien C

PubMed | Jun 1, 2025
Research in the cross-modal medical image translation domain has been very productive over the past few years in tackling the scarce availability of large curated multi-modality datasets, with GAN-based architectures showing promising performance. However, only a few of these studies assessed the task-based performance of the synthetic data, especially for the training of deep models. We design and compare different GAN-based frameworks for generating synthetic brain [18F]fluorodeoxyglucose (FDG) PET images from T1-weighted MRI data. We first perform standard qualitative and quantitative visual quality evaluation. Then, we further explore the impact of using these fake PET data in the training of a deep unsupervised anomaly detection (UAD) model designed to detect subtle epilepsy lesions in T1 MRI and FDG PET images. We introduce novel diagnostic task-oriented quality metrics of the synthetic FDG PET data tailored to our unsupervised detection task, then use these fake data to train a use-case UAD model combining deep representation learning based on siamese autoencoders with an OC-SVM density support estimation model. This model is trained on normal subjects only and allows the detection of any variation from the pattern of the normal population. We compare the detection performance of models trained on real T1 MR images of 35 normal subjects paired either with the 35 true PET images or with 35 synthetic PET images generated from the best performing generative models. Performance analysis is conducted on 17 exams of epilepsy patients undergoing surgery. The best performing GAN-based models allow generating realistic fake PET images of control subjects, with SSIM and PSNR values around 0.9 and 23.8, respectively, and in distribution (ID) with respect to the true control dataset. The best UAD model trained on these synthetic normative PET data reaches 74% sensitivity.
Our results confirm that GAN-based models are the best suited for MR T1 to FDG PET translation, outperforming transformer or diffusion models. We also demonstrate the diagnostic value of these synthetic data for the training of UAD models and evaluation on clinical exams of epilepsy patients. Our code and the normative image dataset are available.

Accuracy of a deep neural network for automated pulmonary embolism detection on dedicated CT pulmonary angiograms.

Zsarnoczay E, Rapaka S, Schoepf UJ, Gnasso C, Vecsey-Nagy M, Todoran TM, Hagar MT, Kravchenko D, Tremamunno G, Griffith JP, Fink N, Derrick S, Bowman M, Sam H, Tiller M, Godoy K, Condrea F, Sharma P, O'Doherty J, Maurovich-Horvat P, Emrich T, Varga-Szemes A

PubMed | Jun 1, 2025
To assess the performance of a Deep Neural Network (DNN)-based prototype algorithm for automated PE detection on CTPA scans. Patients who had previously undergone CTPA with three different systems (SOMATOM Force, go.Top, and Definition AS; Siemens Healthineers, Forchheim, Germany) because of suspected PE from September 2022 to January 2023 were retrospectively enrolled in this study (n = 1,000, 58.8 % women). For detailed evaluation, all PE were divided into three location-based subgroups: central arteries, lobar branches, and peripheral regions. Clinical reports served as ground truth. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy were determined to evaluate the performance of DNN-based PE detection. Cases were excluded due to incomplete data (n = 32), inconclusive reports (n = 17), insufficient contrast detected in the pulmonary trunk (n = 40), or failure of the preprocessing algorithms (n = 8). Therefore, the final cohort included 903 cases with a PE prevalence of 12 % (n = 110). The model achieved a sensitivity, specificity, PPV, and NPV of 84.6, 95.1, 70.5, and 97.8 %, respectively, and delivered an overall accuracy of 93.8 %. Among the false positive cases (n = 39), common sources of error included lung masses, pneumonia, and contrast flow artifacts. Common sources of false negatives (n = 17) included chronic and subsegmental PEs. The proposed DNN-based algorithm provides excellent performance for the detection of PE, suggesting its potential utility to support radiologists in clinical reading and exam prioritization.
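The reported figures are internally consistent, which can be checked from the confusion matrix they imply: with 110 PE-positive cases, 17 false negatives give TP = 93, and with 39 false positives among 793 negatives, TN = 754. The sketch below recomputes the metrics from those derived counts.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard confusion-matrix metrics used to evaluate the detector.
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# Counts derived from the reported cohort: 903 cases, 110 PE-positive,
# 39 false positives, 17 false negatives.
m = diagnostic_metrics(tp=93, fp=39, tn=754, fn=17)
print({k: round(v, 3) for k, v in m.items()})
```

These reproduce the abstract's 84.6 % sensitivity, 95.1 % specificity, 70.5 % PPV, 97.8 % NPV, and 93.8 % accuracy to rounding precision.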

Image normalization techniques and their effect on the robustness and predictive power of breast MRI radiomics.

Schwarzhans F, George G, Escudero Sanchez L, Zaric O, Abraham JE, Woitek R, Hatamikia S

PubMed | Jun 1, 2025
Radiomics analysis has emerged as a promising approach to aid in cancer diagnosis and treatment. However, radiomics research currently lacks standardization, and radiomics features can be highly dependent on the acquisition and pre-processing techniques used. In this study, we investigate the effect of various image normalization techniques on the robustness of radiomics features extracted from breast cancer patient MRI scans. MRI scans from the publicly available MAMA-MIA dataset and an internal breast MRI test set depicting triple-negative breast cancer (TNBC) were used. We compared the effect of commonly used image normalization techniques on radiomics feature robustness using the Concordance Correlation Coefficient (CCC) between multiple combinations of normalization approaches. We also trained machine learning-based prediction models of pathologic complete response (pCR) on radiomics features after different normalization techniques were applied and compared their areas under the receiver operating characteristic curve (ROC-AUC). For predicting pCR from pre-treatment breast cancer MRI radiomics, the highest overall ROC-AUC was achieved by a combination of three different normalization techniques, indicating their potentially powerful role when working with heterogeneous imaging data. The effect of normalization was more pronounced with smaller training data, and normalization may be less important with increasing abundance of training data. Additionally, we observed considerable differences between MRI datasets and their feature robustness towards normalization. Overall, we demonstrate the importance of selecting and standardizing normalization methods for accurate and reliable radiomics analysis in breast MRI scans, especially with small training datasets.
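The CCC used above is Lin's concordance correlation coefficient, which, unlike Pearson's r, penalizes systematic offsets between two measurements of the same feature. A minimal implementation (population variances, biased form) is:

```python
import numpy as np

def ccc(x, y):
    # Lin's Concordance Correlation Coefficient, used here to measure
    # radiomics-feature agreement between two normalization pipelines.
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

a = np.array([1.0, 2.0, 3.0, 4.0])
print(ccc(a, a))          # perfect agreement with itself -> 1.0
print(ccc(a, a + 0.5))    # a constant offset lowers CCC even though Pearson r = 1
```

The second call illustrates why CCC is the right robustness metric here: a normalization change that shifts every feature value by a constant still reduces agreement, which Pearson correlation would miss entirely.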

Scatter and beam hardening effect corrections in pelvic region cone beam CT images using a convolutional neural network.

Yagi S, Usui K, Ogawa K

PubMed | Jun 1, 2025
The aim of this study is to remove scattered photons and the beam-hardening effect in cone beam CT (CBCT) images and make the images available for treatment planning. A convolutional neural network (CNN) was trained with distorted projection data containing scattered photons and the beam-hardening effect as inputs, and projection data calculated with monochromatic X-rays as supervision. The number of training projections was 17,280 with data augmentation, and that of test projections was 540. The performance of the CNN was investigated in terms of the number of photons in the projection data used to train the network. Projection data of pelvic CBCT images (32 cases) were calculated with a Monte Carlo simulation at six different count levels ranging from 0.5 to 3 million counts/pixel. For the evaluation of corrected images, the peak signal-to-noise ratio (PSNR), the structural similarity index measure (SSIM), and the sum of absolute differences (SAD) were used. The simulation results showed that the CNN could effectively remove scattered photons and the beam-hardening effect, and the PSNR, SSIM, and SAD improved significantly. The number of photons in the training projection data was also found to be important for correction accuracy. Furthermore, a CNN model trained with projection data with a sufficient number of photons yielded good performance even when a small number of photons was used in the input projection data.
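Two of the three evaluation metrics above (PSNR and SAD) are simple enough to sketch directly; SSIM is omitted here since it needs a windowed implementation. The reference and "corrected" images below are synthetic stand-ins, not simulation outputs.

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    # Peak signal-to-noise ratio between reference and corrected image,
    # in dB; higher is better.
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)

def sad(ref, img):
    # Sum of absolute differences; lower is better.
    return np.abs(ref - img).sum()

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                 # stand-in ground truth
corrected = ref + rng.normal(scale=0.01, size=ref.shape)   # stand-in CNN output
print(round(psnr(ref, corrected), 1), round(sad(ref, corrected), 1))
```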

Multi-class brain malignant tumor diagnosis in magnetic resonance imaging using convolutional neural networks.

Lv J, Wu L, Hong C, Wang H, Wu Z, Chen H, Liu Z

PubMed | Jun 1, 2025
Glioblastoma (GBM), primary central nervous system lymphoma (PCNSL), and brain metastases (BM) are common malignant brain tumors with similar radiological features, and accurate, non-invasive diagnosis is essential for selecting appropriate treatment plans. This study develops a deep learning model, FoTNet, to improve the automatic diagnosis accuracy of these tumors, particularly for the relatively rare PCNSL. The model integrates a frequency-based channel attention layer and the focal loss to address the class imbalance caused by the limited number of PCNSL samples. A multi-center MRI dataset was constructed by collecting and integrating data from Sir Run Run Shaw Hospital, along with public datasets from UPENN and TCGA. The dataset includes T1-weighted contrast-enhanced (T1-CE) MRI images from 58 GBM, 82 PCNSL, and 269 BM cases, which were divided into training and testing sets with a 5:2 ratio. FoTNet achieved a classification accuracy of 92.5 % and an average AUC of 0.9754 on the test set, significantly outperforming existing machine learning and deep learning methods in distinguishing among GBM, PCNSL, and BM. Through multiple validations, FoTNet has proven to be an effective and robust tool for accurately classifying these brain tumors, providing strong support for preoperative diagnosis and assisting clinicians in making more informed treatment decisions.
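The focal-loss component mentioned above can be sketched in its standard binary form; the paper's three-class setting and its frequency-based channel attention layer are omitted, and the `gamma`/`alpha` values below are the conventional defaults, not necessarily FoTNet's.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    # Binary focal loss: the (1 - p_t)^gamma factor down-weights
    # well-classified examples, so scarce classes (here, PCNSL) are not
    # drowned out by abundant easy examples from the majority classes.
    p_t = np.where(y == 1, p, 1 - p)          # probability of the true class
    a_t = np.where(y == 1, alpha, 1 - alpha)  # class-balancing weight
    return float(-(a_t * (1 - p_t) ** gamma * np.log(p_t)).mean())

# An easy example (p_t = 0.99) contributes far less than a hard one (p_t = 0.6).
easy = focal_loss(np.array([0.99]), np.array([1]))
hard = focal_loss(np.array([0.60]), np.array([1]))
print(easy < hard)
```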

Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning.

Zhang S, Chu S, Qiang Y, Zhao J, Wang Y, Wei X

PubMed | Jun 1, 2025
Computer-aided diagnosis systems based on deep neural networks rely heavily on datasets with high-quality labels. However, manual annotation for lesion diagnosis depends on image features, often requiring professional experience and complex image analysis. This inevitably introduces noisy labels, which can misguide the training of classification models. Our goal is to design an effective method to address the challenges posed by label noise in medical images. We propose a novel noise-tolerant medical image classification framework consisting of two phases: fore-training correction and progressive hard-enhanced learning. In the first phase, we design a dual-branch sample partition detection scheme that effectively classifies each instance into one of three subsets: clean, hard, or noisy. Simultaneously, we propose a hard-sample label refinement strategy based on class prototypes with confidence-perception weighting, and an effective joint correction method for noisy samples, enabling the acquisition of higher-quality training data. In the second phase, we design a progressive hard-sample reinforcement learning method to enhance the model's ability to learn discriminative feature representations. This approach accounts for sample difficulty and mitigates the effects of label noise in medical datasets. Our framework achieves an accuracy of 82.39% on the pneumoconiosis dataset collected by our laboratory. On a five-class skin disease dataset with six different levels of label noise (0, 0.05, 0.1, 0.2, 0.3, and 0.4), the average accuracy over the last ten epochs reaches 88.51%, 86.64%, 85.02%, 83.01%, 81.95%, and 77.89%, respectively; for binary polyp classification under noise rates of 0.2, 0.3, and 0.4, the average accuracy over the last ten epochs is 97.90%, 93.77%, and 89.33%, respectively. The effectiveness of our proposed framework is demonstrated through its performance on three challenging datasets with both real and synthetic noise.
Experimental results further demonstrate the robustness of our method across varying noise rates.
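The clean/hard/noisy partition idea above can be sketched with a simple loss-quantile split. This is a deliberately simplified stand-in for the paper's dual-branch detection scheme: partitioning by per-sample loss rank alone, with assumed quantile thresholds.

```python
import numpy as np

def partition_samples(losses, clean_q=0.3, noisy_q=0.7):
    # Simplified stand-in for the dual-branch partition: rank per-sample
    # losses and split into clean (low), hard (middle), noisy (high) subsets.
    lo, hi = np.quantile(losses, [clean_q, noisy_q])
    return np.where(losses <= lo, "clean",
                    np.where(losses >= hi, "noisy", "hard"))

rng = np.random.default_rng(0)
losses = np.concatenate([
    rng.uniform(0.0, 0.2, 60),  # confidently fit samples
    rng.uniform(0.2, 1.0, 25),  # ambiguous / hard samples
    rng.uniform(1.5, 3.0, 15),  # likely mislabeled samples
])
labels = partition_samples(losses)
print({k: int((labels == k).sum()) for k in ("clean", "hard", "noisy")})
```

In the framework described above, the clean subset trains as-is, the hard subset gets prototype-based label refinement, and the noisy subset gets joint correction; this sketch only shows the partition step.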
