
Image normalization techniques and their effect on the robustness and predictive power of breast MRI radiomics.

Schwarzhans F, George G, Escudero Sanchez L, Zaric O, Abraham JE, Woitek R, Hatamikia S

PubMed · Jun 1, 2025
Radiomics analysis has emerged as a promising approach to aid in cancer diagnosis and treatment. However, radiomics research currently lacks standardization, and radiomics features can be highly dependent on the acquisition and pre-processing techniques used. In this study, we investigated the effect of various image normalization techniques on the robustness of radiomics features extracted from breast cancer patient MRI scans. MRI scans from the publicly available MAMA-MIA dataset and an internal breast MRI test set depicting triple-negative breast cancer (TNBC) were used. We compared the effect of commonly used image normalization techniques on radiomics feature robustness using the concordance correlation coefficient (CCC) between multiple combinations of normalization approaches. We also trained machine learning-based prediction models of pathologic complete response (pCR) on radiomics features obtained after different normalization techniques and compared their areas under the receiver operating characteristic curve (ROC-AUC). For predicting pCR from pre-treatment breast cancer MRI radiomics, the highest overall ROC-AUC was achieved by combining three different normalization techniques, indicating their potentially powerful role when working with heterogeneous imaging data. The effect of normalization was more pronounced with smaller training sets, and normalization may become less important as training data grow more abundant. Additionally, we observed considerable differences between MRI datasets in the robustness of their features towards normalization. Overall, we demonstrated the importance of selecting and standardizing normalization methods for accurate and reliable radiomics analysis in breast MRI scans, especially with small training datasets.
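The robustness screen described above can be sketched with Lin's concordance correlation coefficient in plain NumPy; the 0.85 robustness cutoff used here is a common convention in the radiomics literature, not a value stated in the abstract, and the feature values are toy data:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between two feature vectors.

    CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))**2)
    """
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                    # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Robustness screen: keep features whose values agree across two
# normalization pipelines applied to the same patients (toy values).
feats_norm_a = {"glcm_contrast": np.array([1.0, 2.0, 3.0, 4.0])}
feats_norm_b = {"glcm_contrast": np.array([1.1, 2.0, 2.9, 4.2])}
robust = {
    name: concordance_cc(feats_norm_a[name], feats_norm_b[name]) >= 0.85
    for name in feats_norm_a
}
```

Features failing the cutoff would be dropped before model training, since their values depend on the normalization choice rather than the underlying tissue.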

Multi-class brain malignant tumor diagnosis in magnetic resonance imaging using convolutional neural networks.

Lv J, Wu L, Hong C, Wang H, Wu Z, Chen H, Liu Z

PubMed · Jun 1, 2025
Glioblastoma (GBM), primary central nervous system lymphoma (PCNSL), and brain metastases (BM) are common malignant brain tumors with similar radiological features, and accurate, non-invasive diagnosis is essential for selecting appropriate treatment plans. This study develops a deep learning model, FoTNet, to improve the automatic diagnosis accuracy of these tumors, particularly for the relatively rare PCNSL. The model integrates a frequency-based channel attention layer and the focal loss to address the class imbalance caused by the limited number of PCNSL samples. A multi-center MRI dataset was constructed by collecting and integrating data from Sir Run Run Shaw Hospital, along with public datasets from UPENN and TCGA. The dataset includes T1-weighted contrast-enhanced (T1-CE) MRI images from 58 GBM, 82 PCNSL, and 269 BM cases, which were divided into training and testing sets with a 5:2 ratio. FoTNet achieved a classification accuracy of 92.5% and an average AUC of 0.9754 on the test set, significantly outperforming existing machine learning and deep learning methods in distinguishing among GBM, PCNSL, and BM. Through multiple validations, FoTNet has proven to be an effective and robust tool for accurately classifying these brain tumors, providing strong support for preoperative diagnosis and assisting clinicians in making more informed treatment decisions.
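The focal loss FoTNet uses against the PCNSL class imbalance down-weights well-classified samples; a dependency-light NumPy sketch of the standard formulation follows (the per-class `alpha` weighting is an assumption shown only to illustrate how a rare class such as PCNSL could be up-weighted, not a detail taken from the abstract):

```python
import numpy as np

def focal_loss(probs, targets, gamma=2.0, alpha=None):
    """Multi-class focal loss: FL = -alpha_t * (1 - p_t)**gamma * log(p_t).

    probs:   (N, C) softmax probabilities
    targets: (N,) integer class labels
    With gamma=0 and alpha=None this reduces to plain cross-entropy.
    """
    probs = np.asarray(probs, dtype=float)
    targets = np.asarray(targets)
    p_t = probs[np.arange(len(targets)), targets]   # probability of true class
    p_t = np.clip(p_t, 1e-12, 1.0)
    loss = -((1.0 - p_t) ** gamma) * np.log(p_t)    # focus on hard samples
    if alpha is not None:                           # optional per-class weights
        loss = np.asarray(alpha)[targets] * loss
    return loss.mean()
```

Because `(1 - p_t)**gamma` shrinks toward zero for confidently correct predictions, gradients concentrate on the misclassified minority-class cases.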

Combating Medical Label Noise through more precise partition-correction and progressive hard-enhanced learning.

Zhang S, Chu S, Qiang Y, Zhao J, Wang Y, Wei X

PubMed · Jun 1, 2025
Computer-aided diagnosis systems based on deep neural networks rely heavily on datasets with high-quality labels. However, manual annotation for lesion diagnosis depends on image features, often requiring professional experience and complex image analysis. This inevitably introduces noisy labels, which can misguide the training of classification models. Our goal is to design an effective method to address the challenges posed by label noise in medical images. We propose a novel noise-tolerant medical image classification framework consisting of two phases: fore-training correction and progressive hard-sample enhanced learning. In the first phase, we design a dual-branch sample partition detection scheme that effectively classifies each instance into one of three subsets: clean, hard, or noisy. Simultaneously, we propose a hard-sample label refinement strategy based on class prototypes with confidence-perception weighting, together with an effective joint correction method for noisy samples, enabling the acquisition of higher-quality training data. In the second phase, we design a progressive hard-sample reinforcement learning method to enhance the model's ability to learn discriminative feature representations. This approach accounts for sample difficulty and mitigates the effects of label noise in medical datasets. Our framework achieves an accuracy of 82.39% on the pneumoconiosis dataset collected by our laboratory. On a five-class skin disease dataset with six levels of label noise (0, 0.05, 0.1, 0.2, 0.3, and 0.4), the average accuracy over the last ten epochs reaches 88.51%, 86.64%, 85.02%, 83.01%, 81.95%, and 77.89%, respectively. For binary polyp classification under noise rates of 0.2, 0.3, and 0.4, the average accuracy over the last ten epochs is 97.90%, 93.77%, and 89.33%, respectively. The effectiveness of our proposed framework is demonstrated through its performance on three challenging datasets with both real and synthetic noise.
Experimental results further demonstrate the robustness of our method across varying noise rates.
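The clean/hard/noisy partition idea can be illustrated with a simple per-sample-loss heuristic; note that this quantile-threshold stand-in is only a sketch of the general small-loss principle, not the paper's dual-branch detection scheme, and the quantile cutoffs are assumptions:

```python
import numpy as np

def partition_samples(losses, low_q=0.3, high_q=0.8):
    """Partition training samples into clean / hard / noisy by per-sample loss.

    Illustrative stand-in for a noise-detection step: low-loss samples are
    treated as clean, high-loss as likely mislabeled (noisy), and the rest
    as hard. The quantile thresholds are assumptions, not from the paper.
    """
    losses = np.asarray(losses, dtype=float)
    lo, hi = np.quantile(losses, [low_q, high_q])
    return np.where(losses <= lo, "clean",
                    np.where(losses >= hi, "noisy", "hard"))

# Toy per-sample cross-entropy losses from a warm-up epoch.
per_sample_loss = np.array([0.05, 0.10, 0.9, 2.5, 0.4, 1.8, 0.2, 3.0, 0.6, 0.3])
subsets = partition_samples(per_sample_loss)
```

In a full pipeline, the "clean" subset would train the model directly, "hard" samples would get refined labels, and "noisy" samples would be corrected or down-weighted.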

Predicting hepatocellular carcinoma response to TACE: A machine learning study based on 2.5D CT imaging and deep features analysis.

Lin C, Cao T, Tang M, Pu W, Lei P

PubMed · Jun 1, 2025
Prior to the commencement of treatment, it is essential to establish an objective method for accurately predicting the prognosis of patients with hepatocellular carcinoma (HCC) undergoing transarterial chemoembolization (TACE). In this study, we aimed to develop a machine learning (ML) model to predict the response of HCC patients to TACE based on CT image analysis. The public dataset from The Cancer Imaging Archive (TCIA), uploaded in August 2022, comprised 105 cases, including 68 males and 37 females. The external testing dataset, collected from March 1, 2019 to July 1, 2022, consisted of 26 patients (22 males and 4 females) who underwent TACE treatment at our institution and were followed up for at least 3 months after TACE. The public dataset was utilized for ResNet50 transfer learning and ML model construction, while the external testing dataset was used for model performance evaluation. For each case, the CT images containing the largest lesion in the axial, sagittal, and coronal orientations were selected to construct 2.5D images. Pre-trained ResNet50 weights were adapted through transfer learning to serve as a feature extractor, deriving deep features for building the ML models. Model performance was assessed using area under the curve (AUC), accuracy, F1-score, confusion matrix analysis, decision curves, and calibration curves. The AUC values for the external testing dataset were 0.90, 0.90, 0.91, and 0.89 for the random forest classifier (RFC), support vector classifier (SVC), logistic regression (LR), and extreme gradient boosting (XGB), respectively. The corresponding accuracy values were 0.79, 0.81, 0.80, and 0.80, and the F1-scores were 0.75, 0.77, 0.78, and 0.79.
The ML model constructed using deep features from 2.5D images has the potential to be applied in predicting the prognosis of HCC patients following TACE treatment.
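The 2.5D construction described above, stacking the axial, sagittal, and coronal slices through the largest lesion as three channels, can be sketched as follows; the crop/zero-pad used here in place of proper interpolation keeps the sketch dependency-free and is an assumption, as is the lesion-centre argument:

```python
import numpy as np

def make_25d_image(volume, center, size=224):
    """Build a 2.5D image from a CT volume: the axial, sagittal, and coronal
    slices through a lesion centre, stacked as three channels.

    Real pipelines would resample each slice with interpolation; here we
    crop or zero-pad to a fixed grid to stay dependency-free.
    """
    z, y, x = center
    # Orthogonal slices through the lesion centre: axial, sagittal, coronal.
    slices = [volume[z, :, :], volume[:, :, x], volume[:, y, :]]
    chans = []
    for s in slices:
        out = np.zeros((size, size), dtype=float)
        h, w = min(s.shape[0], size), min(s.shape[1], size)
        out[:h, :w] = s[:h, :w]          # crop or zero-pad to size x size
        chans.append(out)
    return np.stack(chans, axis=0)       # (3, size, size): 3-channel input

vol = np.random.rand(64, 128, 128)       # toy CT volume, ordered (z, y, x)
img_25d = make_25d_image(vol, center=(32, 64, 64))
```

The resulting 3-channel array matches the input shape a pretrained ResNet50 expects, after which its penultimate-layer activations would serve as the deep features fed to the RFC/SVC/LR/XGB classifiers.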

Machine Learning Models in the Detection of MB2 Canal Orifice in CBCT Images.

Shetty S, Yuvali M, Ozsahin I, Al-Bayatti S, Narasimhan S, Alsaegh M, Al-Daghestani H, Shetty R, Castelino R, David LR, Ozsahin DU

PubMed · Jun 1, 2025
The objective of the present study was to determine the accuracy of machine learning (ML) models in the detection of mesiobuccal (MB2) canals in axial cone-beam computed tomography (CBCT) sections. A total of 2500 CBCT scans from the oral radiology department of University Dental Hospital, Sharjah were screened to obtain 277 high-resolution, small field-of-view CBCT scans with maxillary molars. Among the 277 scans, 160 showed the presence of an MB2 orifice and the remaining 117 did not. Two-dimensional axial images of these scans were then cropped. The images were classified and labelled as N (absence of MB2) or M (presence of MB2) by 2 examiners. The images were embedded using Google's Inception V3 and passed to the ML classification models. Six different ML models (logistic regression [LR], naïve Bayes [NB], support vector machine [SVM], k-nearest neighbours [kNN], random forest [RF], neural network [NN]) were then tested on their ability to classify the images into M and N. The classification metrics (area under the curve [AUC], accuracy, F1-score, precision) of the models were assessed in 3 steps. NN (0.896), LR (0.893), and SVM (0.886) showed the highest AUC values with specified target variables (steps 2 and 3). The highest accuracy was exhibited by LR (0.849) and NN (0.848) with specified target variables. The highest precision (86.8%) and recall (92.5%) were observed with the SVM model. The success rates (AUC, precision, recall) of the ML algorithms in the detection of MB2 were remarkable in our study. When the target variable was specified, success rates as high as 86.8% precision and 92.5% recall were achieved. The present study showed promising results for ML-based detection of the MB2 canal using axial CBCT slices.
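The six-classifier comparison on frozen embeddings can be sketched with scikit-learn; the synthetic 2048-dimensional vectors below merely stand in for the Inception V3 embeddings of the cropped slices, so the resulting AUCs are illustrative only:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

# Stand-in for Inception V3 embeddings of the cropped axial slices:
# 2048-d vectors with a weak injected class signal (synthetic data).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2048))
y = rng.integers(0, 2, size=300)        # 1 = MB2 present, 0 = absent
X[y == 1, :10] += 1.0                   # make the classes separable

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "NB": GaussianNB(),
    "SVM": SVC(probability=True),
    "kNN": KNeighborsClassifier(),
    "RF": RandomForestClassifier(random_state=0),
    "NN": MLPClassifier(max_iter=300, random_state=0),
}
aucs = {name: roc_auc_score(y_te, m.fit(X_tr, y_tr).predict_proba(X_te)[:, 1])
        for name, m in models.items()}
```

Because the embeddings are frozen, each classifier trains in seconds, which is what makes this embed-then-classify pattern attractive for small clinical datasets.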

The integration of artificial intelligence into clinical medicine: Trends, challenges, and future directions.

Aravazhi PS, Gunasekaran P, Benjamin NZY, Thai A, Chandrasekar KK, Kolanu ND, Prajjwal P, Tekuru Y, Brito LV, Inban P

PubMed · Jun 1, 2025
AI has emerged as a transformative force in clinical medicine, changing the diagnosis, treatment, and management of patients. Machine learning (ML), deep learning (DL), and natural language processing (NLP) algorithms can now analyze large, complex medical datasets with unprecedented accuracy and speed, thereby improving diagnostic precision, treatment personalization, and patient care outcomes. For example, convolutional neural networks (CNNs) have dramatically improved the accuracy of medical imaging diagnoses, and NLP algorithms have greatly helped extract insights from unstructured data, including electronic health records (EHRs). However, numerous challenges still face AI integration into clinical workflows, including data privacy, algorithmic bias, ethical dilemmas, and problems with the interpretability of "black-box" AI models. These barriers have thus far prevented the widespread application of AI in health care, making a systematic exploration of its trends, obstacles, and future implications necessary. The purpose of this paper is, therefore, to assess current trends in AI applications in clinical medicine, identify the obstacles hindering adoption, and outline possible future directions. This research synthesizes evidence from peer-reviewed articles to provide a more comprehensive understanding of the role AI plays in advancing clinical practice, improving patient outcomes, and enhancing decision-making. A systematic review was conducted according to the PRISMA guidelines to explore the integration of artificial intelligence in clinical medicine, including trends, challenges, and future directions. The PubMed, Cochrane Library, Web of Science, and Scopus databases were searched for peer-reviewed articles from 2014 to 2024 with keywords such as "Artificial Intelligence in Medicine," "AI in Clinical Practice," "Machine Learning in Healthcare," and "Ethical Implications of AI in Medicine."
Studies focusing on AI applications in diagnostics, treatment planning, and patient care that reported measurable clinical outcomes were included. Non-clinical AI applications and articles published before 2014 were excluded. Selected studies were screened for relevance, and their quality was critically appraised to synthesize data reliably and rigorously. This systematic review includes the findings of 8 studies that point to the transformational role of AI in clinical medicine. AI tools, such as CNNs, achieved higher diagnostic accuracy than traditional methods, particularly in radiology and pathology. Predictive models efficiently supported risk stratification, early disease detection, and personalized medicine. Despite these improvements, significant hurdles, including data privacy, algorithmic bias, and resistance from clinicians regarding the "black-box" nature of AI, have yet to be surmounted. Explainable AI (XAI) has emerged as an attractive solution that promises to enhance interpretability and trust. As a whole, AI appears promising for enhancing diagnostics, treatment personalization, and clinical workflows by addressing systemic inefficiencies. AI thus has the potential to transform diagnostics, treatment strategies, and efficiency in clinical medicine. Overcoming obstacles such as concerns about data privacy, the danger of algorithmic bias, and difficulties with interpretability may pave the way for broader use and facilitate improvement in patient outcomes while transforming clinical workflows to bring sustainability into healthcare delivery.

A radiomics model combining machine learning and neural networks for high-accuracy prediction of cervical lymph node metastasis on ultrasound of head and neck squamous cell carcinoma.

Fukuda M, Eida S, Katayama I, Takagi Y, Sasaki M, Sumi M, Ariji Y

PubMed · Jun 1, 2025
This study aimed to develop an ultrasound image-based radiomics model for diagnosing cervical lymph node (LN) metastasis in patients with head and neck squamous cell carcinoma (HNSCC) that shows higher accuracy than previous models. A total of 537 LNs (260 metastatic and 277 nonmetastatic) from 126 patients (78 men, 48 women, average age 63 years) were enrolled. The multivariate analysis software Prediction One (Sony Network Communications Corporation) was used to create the diagnostic models, and three machine learning methods were adopted as comparison approaches. Based on combinations of texture analysis results, clinical information, and ultrasound findings interpreted by specialists, a total of 12 models were created, three for each machine learning method, and their diagnostic performance was compared. The three best models had an area under the curve of 0.98. Parameters related to ultrasound findings, such as the presence of a hilum, echogenicity, and granular parenchymal echoes, showed particularly high contributions. Other significant contributors were texture analysis parameters indicating the minimum pixel value, the number of contiguous pixels with the same echogenicity, and the uniformity of gray levels. The radiomics model developed was able to accurately diagnose cervical LN metastasis in HNSCC.

PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer.

Ma B, Guo J, Dijk LVV, Langendijk JA, Ooijen PMAV, Both S, Sijtsema NM

PubMed · Jun 1, 2025
In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA) deep learning models were introduced for predicting the recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with an optimized number of layers and image-fusion strategy, could achieve performance comparable to the SOTA models. The HECKTOR 2022 dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers. It was randomly divided into a training set (n = 369) and an independent test set (n = 120). Furthermore, an additional dataset of 400 OPC patients, who underwent chemo(radio)therapy at our center, was employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated gross tumour volume (GTV) contours for primary tumors and lymph nodes, and RFP information. The present study compared the performance of DenseNet against three SOTA models developed on the HECKTOR 2022 dataset. When inputting CT, PET, and GTV using the early fusion approach (treating them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, comparable with the SOTA models. Notably, removing the GTV from the input data yielded the same internal test C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Furthermore, compared to PET-only models, when utilizing late fusion (concatenation of extracted features) of CT and PET, DenseNet81 demonstrated superior C-index values of 0.68 and 0.66 in the internal and external test sets, respectively, whereas early fusion was better only in the internal test set. The basic DenseNet architecture with 81 layers demonstrated predictive performance on par with SOTA models featuring more intricate architectures in the internal test set, and better performance in the external test set.
The late fusion of CT and PET imaging data yielded superior performance in the external test.
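The C-index used above to score RFP predictions can be sketched as Harrell's concordance index; the exact scoring used in the HECKTOR challenge may differ in its tie and censoring handling, and the patient data below are toy values:

```python
import numpy as np

def concordance_index(times, events, risk_scores):
    """Harrell's C-index: the fraction of comparable patient pairs in which
    the higher predicted risk corresponds to the earlier observed event.

    A pair is comparable when the patient with the shorter time had an
    observed event (not censored); tied risks count as 0.5.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=bool)
    risk = np.asarray(risk_scores, dtype=float)
    num, den = 0.0, 0.0
    for i in range(len(times)):
        if not events[i]:                      # censored: cannot anchor a pair
            continue
        for j in range(len(times)):
            if times[i] < times[j]:            # i recurred first: comparable
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Toy recurrence-free-period data: higher score should mean earlier recurrence.
t = [5, 8, 12, 20]
e = [1, 1, 0, 1]                               # 0 = censored
scores = [0.9, 0.7, 0.4, 0.1]
cindex = concordance_index(t, e, scores)
```

A C-index of 0.5 corresponds to random ranking, so the reported values of 0.63-0.69 indicate a moderate but genuine ability to order patients by recurrence risk.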

Beyond traditional orthopaedic data analysis: AI, multimodal models and continuous monitoring.

Oettl FC, Zsidai B, Oeding JF, Hirschmann MT, Feldt R, Tischer T, Samuelsson K

PubMed · Jun 1, 2025
Multimodal artificial intelligence (AI) has the potential to revolutionise healthcare by enabling the simultaneous processing and integration of various data types, including medical imaging, electronic health records, genomic information and real-time data. This review explores the current applications and future potential of multimodal AI across healthcare, with a particular focus on orthopaedic surgery. In presurgical planning, multimodal AI has demonstrated significant improvements in diagnostic accuracy and risk prediction, with studies reporting area under the receiver operating characteristic curve values indicating good to excellent performance across various orthopaedic conditions. Intraoperative applications leverage advanced imaging and tracking technologies to enhance surgical precision, while postoperative care has been advanced through continuous patient monitoring and early detection of complications. Despite these advances, significant challenges remain in data integration, standardisation and privacy protection. Technical solutions such as federated learning (which trains models in a decentralised fashion) and edge computing (which moves data analysis on site or close to it, rather than to multipurpose data centres) are being developed to address these concerns while maintaining compliance with regulatory frameworks. As this field continues to evolve, the integration of multimodal AI promises to advance personalised medicine, improve patient outcomes, and transform healthcare delivery through more comprehensive and nuanced analysis of patient data. Level of Evidence: Level V.

CT-derived fractional flow reserve on therapeutic management and outcomes compared with coronary CT angiography in coronary artery disease.

Qian Y, Chen M, Hu C, Wang X

PubMed · Jun 1, 2025
To determine the value of on-site deep learning-based CT-derived fractional flow reserve (CT-FFR) for therapeutic management and adverse clinical outcomes in patients suspected of coronary artery disease (CAD), compared with coronary CT angiography (CCTA) alone. This single-centre prospective study included consecutive patients suspected of CAD between June 2021 and September 2021 at our hospital. Four hundred and sixty-one patients were randomized into either the CT-FFR+CCTA or the CCTA-alone group. The first endpoint was ICA efficiency, defined as the rate of invasive coronary angiography (ICA) with nonobstructive disease (stenosis <50%) and the ratio of revascularization to ICA (REV-to-ICA ratio) within 90 days. The second endpoint was the incidence of major adverse cardiovascular events (MACE) at 2 years. A total of 461 patients (267 [57.9%] men; median age, 64 [55-69] years) were included. At 90 days, the rate of ICA with nonobstructive disease in the CT-FFR+CCTA group was lower than in the CCTA group (14.7% vs 34.0%, P=.047). The REV-to-ICA ratio in the CT-FFR+CCTA group was significantly higher than in the CCTA group (73.5% vs 50.9%, P=.036). No significant difference in ICA efficiency was found for intermediate stenosis (25%-69%) between the 2 groups (all P>.05). After a median follow-up of 23 (22-24) months, MACE were observed in 11 patients in the CT-FFR+CCTA group and 24 in the CCTA group (5.9% vs 10.0%, P=.095). On-site deep learning-based CT-FFR improved the efficiency of ICA utilization with a similarly low rate of MACE compared with CCTA alone, and was superior to CCTA for guiding therapeutic management.
