
Enhancing cancer diagnostics through a novel deep learning-based semantic segmentation algorithm: A low-cost, high-speed, and accurate approach.

Benabbou T, Sahel A, Badri A, Mourabit IE

PubMed · Jun 26, 2025
Deep learning-based semantic segmentation approaches provide an efficient and automated means for cancer diagnosis and monitoring, which is important in clinical applications. However, implementing these approaches outside the experimental environment and using them in real-world applications requires powerful hardware resources that are not available in most hospitals, especially in low- and middle-income countries. Consequently, most of these algorithms are unlikely to be used in clinical settings, or at best their adoption will remain limited. Some approaches that reduce computational cost have been proposed to address these issues, but they performed poorly and failed to produce satisfactory results. Finding a method that overcomes these limitations without sacrificing performance is therefore highly challenging. To meet this challenge, our study proposes a novel, optimized convolutional neural network-based approach for medical image segmentation that consists of multiple synthesis and analysis paths connected through a series of long skip connections. The design leverages multi-scale convolution, multi-scale feature extraction, downsampling strategies, and feature map fusion methods, all of which have proven effective in enhancing performance. The framework was extensively evaluated against current state-of-the-art architectures on various medical image segmentation tasks, including lung tumor, spleen, and pancreatic tumor segmentation. The results of these experiments demonstrate that the proposed approach outperforms existing state-of-the-art methods across multiple evaluation metrics while reducing computational complexity and the number of parameters, resulting in greater segmentation accuracy, faster processing, and more efficient implementation.
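
As an illustration of the building blocks named above (multi-scale convolution, downsampling, feature map fusion, and long skip connections), a minimal PyTorch sketch follows; the kernel sizes, channel counts, and two-level depth are assumptions for demonstration, not the authors' published architecture.

```python
# Minimal sketch of a multi-scale convolution block with a long skip connection.
# Kernel sizes, channel counts, and depth are illustrative assumptions only.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    """Applies parallel 3x3 and 5x5 convolutions and fuses their feature maps."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Sequential(nn.Conv2d(in_ch, out_ch // 2, 3, padding=1),
                                     nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(nn.Conv2d(in_ch, out_ch // 2, 5, padding=2),
                                     nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)  # feature-map fusion

class TinySegNet(nn.Module):
    """One analysis (encoder) path and one synthesis (decoder) path joined by a long skip."""
    def __init__(self, in_ch=1, n_classes=2):
        super().__init__()
        self.enc = MultiScaleBlock(in_ch, 32)
        self.down = nn.MaxPool2d(2)                      # downsampling strategy
        self.bottleneck = MultiScaleBlock(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec = MultiScaleBlock(64, 32)               # 64 = upsampled 32 + skipped 32
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        e = self.enc(x)
        b = self.bottleneck(self.down(e))
        u = self.up(b)
        d = self.dec(torch.cat([u, e], dim=1))           # long skip connection
        return self.head(d)

logits = TinySegNet()(torch.randn(1, 1, 128, 128))       # -> (1, 2, 128, 128)
```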

Constructing high-quality enhanced 4D-MRI with personalized modeling for liver cancer radiotherapy.

Yao Y, Chen B, Wang K, Cao Y, Zuo L, Zhang K, Chen X, Kuo M, Dai J

PubMed · Jun 26, 2025
In magnetic resonance imaging (MRI), a short acquisition time and good image quality are difficult to achieve simultaneously. Reconstructing time-resolved volumetric MRI (4D-MRI) to delineate and monitor thoracic and upper abdominal tumor motion is therefore a challenge, and existing MRI sequences have limited applicability to 4D-MRI. A method is proposed for reconstructing high-quality, personalized, enhanced 4D-MR images: low-quality 4D-MR images are acquired first and then enhanced by deep learning-based personalization to generate high-quality 4D-MR images. High-speed multiphase 3D fast spoiled gradient recalled echo (FSPGR) sequences were used to generate low-quality enhanced free-breathing 4D-MR images and paired low-/high-quality breath-holding 4D-MR images for 58 liver cancer patients. A personalized model guided by the paired breath-holding 4D-MR images was then developed for each patient to cope with patient heterogeneity. The 4D-MR images generated by the personalized model were of much higher quality than the low-quality 4D-MR images obtained by conventional scanning, as demonstrated by significant improvements in peak signal-to-noise ratio, structural similarity, normalized root mean square error, and cumulative probability of blur detection. The introduction of individualized information gave the personalized model a statistically significant improvement over the general model (p < 0.001). The proposed method can quickly reconstruct high-quality 4D-MR images and is potentially applicable to radiotherapy for liver cancer.
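
The quality comparison described above rests on standard full-reference metrics; the sketch below shows how three of them (PSNR, SSIM, NRMSE) can be computed with scikit-image on a synthetic low-/high-quality slice pair. The data and noise level are stand-ins, and the cumulative probability of blur detection is not included.

```python
# Sketch of the image-quality comparison (PSNR, SSIM, NRMSE) on a synthetic
# low-/high-quality image pair; real 4D-MRI volumes would be compared
# slice-by-slice or in 3D. Illustrative only.
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio,
                             structural_similarity,
                             normalized_root_mse)

rng = np.random.default_rng(0)
high_quality = rng.random((256, 256)).astype(np.float32)   # stand-in for a reference slice
low_quality = high_quality + 0.05 * rng.standard_normal((256, 256)).astype(np.float32)

print("PSNR :", peak_signal_noise_ratio(high_quality, low_quality, data_range=1.0))
print("SSIM :", structural_similarity(high_quality, low_quality, data_range=1.0))
print("NRMSE:", normalized_root_mse(high_quality, low_quality))
```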

Machine Learning Models for Predicting Mortality in Pneumonia Patients.

Pavlovic V, Haque MS, Grubor N, Pavlovic A, Stanisavljevic D, Milic N

PubMed · Jun 26, 2025
Pneumonia remains a significant cause of hospital mortality, prompting the need for precise mortality prediction methods. This study conducted a systematic review to identify predictors of mortality used in Machine Learning (ML) models and then applied these methods to hospitalized pneumonia patients at the University Clinical Centre Zvezdara. The systematic review identified 16 studies (313,572 patients), revealing common mortality predictors including age, oxygen levels, and albumin. A Random Forest (RF) model developed on local data (n=343) achieved an accuracy of 99% and an AUC of 0.99. Key predictors identified were chest X-ray worsening, ventilator use, age, and oxygen support. ML demonstrated high potential for accurately predicting pneumonia mortality, surpassing traditional severity scores and highlighting its practical clinical utility.
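
A minimal sketch of the modeling step, assuming a scikit-learn Random Forest evaluated by accuracy and AUC on synthetic tabular data; the feature matrix below merely stands in for the clinical predictors (age, oxygen support, chest X-ray worsening, ventilator use) reported in the study.

```python
# Minimal sketch of a Random Forest mortality model evaluated with accuracy and AUC.
# The synthetic predictors stand in for the study's clinical variables; they are
# not the actual Zvezdara dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=343, n_features=10, weights=[0.8, 0.2], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

rf = RandomForestClassifier(n_estimators=500, random_state=42)
rf.fit(X_train, y_train)

print("Accuracy:", accuracy_score(y_test, rf.predict(X_test)))
print("AUC     :", roc_auc_score(y_test, rf.predict_proba(X_test)[:, 1]))
print("Top feature importances:", np.argsort(rf.feature_importances_)[::-1][:4])
```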

Implementation of an Intelligent System for Detecting Breast Cancer Cells from Histological Images, and Evaluation of Its Results at CHU Bogodogo.

Nikiema WC, Ouattara TA, Barro SG, Ouedraogo AS

PubMed · Jun 26, 2025
Early detection of breast cancer is a major challenge in the fight against this disease. Artificial intelligence (AI), particularly through medical imaging, offers promising prospects for improving diagnostic accuracy. This article focuses on evaluating the effectiveness of an intelligent electronic system deployed at the CHU of Bogodogo in Burkina Faso, designed to detect breast cancer cells from histological images. The system aims to reduce diagnosis time and enhance screening reliability. The article also discusses the challenges, innovations, and prospects for integrating the system into the conventional laboratory examination process, while considering the associated ethical and technical issues.

Recent Advances in Generative Models for Synthetic Brain MRI Image Generation.

Ding X, Bai L, Abbasi SF, Pournik O, Arvanitis T

PubMed · Jun 26, 2025
With the growing use of artificial intelligence (AI) for the analysis of Magnetic Resonance Imaging (MRI), the lack of training data has become an issue. Realistic synthetic MRI images can serve as a solution, and generative models have been proposed for this purpose. This study investigates the most recent advances in synthetic brain MRI image generation with AI-based generative models. A search was conducted for relevant studies published within the last three years, followed by a narrative review of the identified articles. Popular models from the search results are discussed, including Generative Adversarial Networks (GANs), diffusion models, Variational Autoencoders (VAEs), and transformers.
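
For readers unfamiliar with these model families, a minimal Variational Autoencoder sketch for 2D slices is shown below; the 64x64 resolution, layer sizes, and latent dimension are illustrative assumptions and do not come from any of the reviewed papers.

```python
# Minimal VAE sketch for 2D image slices (a stand-in for brain MRI slices).
# Layer sizes and resolution are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SliceVAE(nn.Module):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),            # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),           # 32 -> 16
            nn.Flatten())
        self.fc_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.fc_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid())  # 32 -> 64

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_term = F.binary_cross_entropy(recon, x, reduction="sum")
    kl_term = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_term + kl_term

vae = SliceVAE()
fake_batch = torch.rand(4, 1, 64, 64)            # stand-in for normalized MRI slices
recon, mu, logvar = vae(fake_batch)
print(vae_loss(recon, fake_batch, mu, logvar).item())
```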

Enhancing Diagnostic Precision: Utilising a Large Language Model to Extract U Scores from Thyroid Sonography Reports.

Watts E, Pournik O, Allington R, Ding X, Boelaert K, Sharma N, Ghalichi L, Arvanitis TN

PubMed · Jun 26, 2025
This study evaluates the performance of ChatGPT-4, a Large Language Model (LLM), in automatically extracting U scores from free-text thyroid ultrasound reports collected from University Hospitals Birmingham (UHB), UK, between 2014 and 2024. The LLM was provided with guidelines on the U classification system and extracted U scores independently from 14,248 de-identified reports, without access to human-assigned scores. The LLM-extracted scores were compared to initial clinician-assigned and refined U scores provided by expert reviewers. The LLM achieved 97.7% agreement with refined human U scores, successfully identifying the highest U score in 98.1% of reports with multiple nodules. Most discrepancies (2.5%) were linked to ambiguous descriptions, multi-nodule reports, and cases with human-documented uncertainty. While the results demonstrate the potential for LLMs to improve reporting consistency and reduce manual workload, ethical and governance challenges such as transparency, privacy, and bias must be addressed before routine clinical deployment. Embedding LLMs into reporting workflows, such as Online Analytical Processing (OLAP) tools, could further enhance reporting quality and consistency.
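
A hedged sketch of the extraction-and-agreement workflow: a prompt template, a simple parser that keeps the highest U score mentioned in a model reply, and an agreement calculation against refined human scores. The prompt wording, parsing rule, and toy data are assumptions for illustration; the study itself used ChatGPT-4 on de-identified UHB reports.

```python
# Sketch of U-score extraction and agreement scoring. The prompt wording, the
# example replies, and the parsing rule are illustrative assumptions.
import re

PROMPT_TEMPLATE = (
    "You are given the UK 'U' thyroid ultrasound classification guidelines.\n"
    "Read the report below and return only the highest U score it supports, "
    "formatted as U1-U5.\n\nReport:\n{report}"
)

def highest_u_score(model_reply: str):
    """Parse all U1-U5 mentions in the model reply and keep the highest."""
    scores = [int(m) for m in re.findall(r"\bU([1-5])\b", model_reply)]
    return max(scores) if scores else None

# Stand-in model replies and refined human scores for three hypothetical reports.
llm_replies = ["U3", "Two nodules: U2 and U4, highest is U4", "U2"]
human_scores = [3, 4, 3]

llm_scores = [highest_u_score(r) for r in llm_replies]
agreement = sum(a == b for a, b in zip(llm_scores, human_scores)) / len(human_scores)
print(f"Agreement with refined human scores: {agreement:.1%}")   # 66.7% on this toy set
```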

Artificial Intelligence in Cognitive Decline Diagnosis: Evaluating Cutting-Edge Techniques and Modalities.

Gharehbaghi A, Babic A

PubMed · Jun 26, 2025
This paper presents the results of a scoping review that examines the potential of Artificial Intelligence (AI) in the early diagnosis of Cognitive Decline (CD), a key issue in elderly health. The review encompasses peer-reviewed publications from 2020 to 2025, including scientific journals and conference proceedings. Over 70% of the studies rely on magnetic resonance imaging (MRI) as the input to the AI models, with a high diagnostic accuracy of 98%. Integrating relevant clinical data and electroencephalograms (EEG) with deep learning methods enhances diagnostic accuracy in clinical settings. Recent studies have also explored the use of natural language processing models for detecting CD at its early stages, with an accuracy of 75%, showing high potential for use in appropriate pre-clinical environments.

Deep transfer learning radiomics combined with explainable machine learning for preoperative thymoma risk prediction based on CT.

Wu S, Fan L, Wu Y, Xu J, Guo Y, Zhang H, Xu Z

PubMed · Jun 26, 2025
This study aimed to develop and validate a computerized tomography (CT)‑based deep transfer learning radiomics model combined with explainable machine learning for preoperative risk prediction of thymoma. This retrospective study included 173 pathologically confirmed thymoma patients from our institution in the training group and 93 patients from two external centers in the external validation group. Tumors were classified according to the World Health Organization simplified criteria as low‑risk types (A, AB, and B1) or high‑risk types (B2 and B3). Radiomics features and deep transfer learning features were extracted from venous‑phase contrast‑enhanced CT images by using a modified Inception V3 network. Principal component analysis and least absolute shrinkage and selection operator regression identified 20 key predictors. Six classifiers, namely decision tree, gradient boosting machine, k‑nearest neighbors, naïve Bayes, random forest (RF), and support vector machine, were trained on five feature sets: CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model. Interpretability was assessed with SHapley Additive exPlanations (SHAP), and an interactive web application was developed for real‑time individualized risk prediction and visualization. In the external validation group, the RF classifier achieved the highest area under the receiver operating characteristic curve (AUC) value of 0.956. In the training group, the AUC values for the CT imaging model, radiomics feature model, deep transfer learning feature model, combined feature model, and combined model were 0.684, 0.831, 0.815, 0.893, and 0.910, respectively. The corresponding AUC values in the external validation group were 0.604, 0.865, 0.880, 0.934, and 0.956, respectively. SHAP visualizations revealed the relative contribution of each feature, while the web application provided real‑time individual prediction probabilities with interpretative outputs. We developed a CT‑based deep transfer learning radiomics model combined with explainable machine learning and an interactive web application; this model achieved high accuracy and transparency for preoperative thymoma risk stratification, facilitating personalized clinical decision‑making.
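
A minimal sketch of the combined-feature pipeline, assuming scikit-learn and the shap package: radiomics and deep transfer learning features are concatenated, LASSO picks the strongest predictors, a random forest is trained, and SHAP values are computed. The synthetic feature matrices and sample size are stand-ins, not the thymoma cohort, and the modified Inception V3 extractor is not reproduced here.

```python
# Sketch of the combined-feature pipeline: concatenate radiomics and deep
# transfer learning features, select predictors with LASSO, train a random
# forest, and explain it with SHAP. All data here are synthetic stand-ins.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
radiomics = rng.standard_normal((266, 100))      # e.g. handcrafted texture/shape features
deep_feats = rng.standard_normal((266, 128))     # e.g. pooled CNN activations
X = np.hstack([radiomics, deep_feats])
y = rng.integers(0, 2, size=266)                 # low-risk vs. high-risk label (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# LASSO-based selection of roughly the strongest predictors (the paper kept 20).
lasso = LassoCV(cv=5, random_state=0).fit(X_train, y_train)
selected = np.argsort(np.abs(lasso.coef_))[::-1][:20]

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(X_train[:, selected], y_train)
print("AUC:", roc_auc_score(y_test, rf.predict_proba(X_test[:, selected])[:, 1]))

# SHAP values quantify each selected feature's contribution per patient.
explainer = shap.TreeExplainer(rf)
shap_values = explainer.shap_values(X_test[:, selected])
print("SHAP output shape(s):", np.shape(shap_values))
```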

Design and Optimization of an automatic deep learning-based cerebral reperfusion scoring (TICI) using thrombus localization.

Folcher A, Piters J, Wallach D, Guillard G, Ognard J, Gentric JC

PubMed · Jun 26, 2025
The Thrombolysis in Cerebral Infarction (TICI) scale is widely used to assess angiographic outcomes of mechanical thrombectomy despite significant variability. Our objective was to create and optimize an artificial intelligence (AI)-based classification model for digital subtraction angiography (DSA) TICI scoring. Using a monocentric DSA dataset of thrombectomies and a platform for medical image analysis, independent readers labeled each series according to the TICI score and marked each thrombus. A convolutional neural network (CNN) classification model was created to classify TICI scores into 2 groups (TICI 0, 1, or 2a versus TICI 2b, 2c, or 3) and into 3 groups (TICI 0, 1, or 2a versus TICI 2b versus TICI 2c or 3). The algorithm was first tested alone; thrombus positions were then provided to the algorithm, first by manual placement and then by an automatic thrombus detection module. A total of 422 patients were enrolled in the study, and 2492 thrombi were annotated on the TICI-labeled series. The model was trained on a total of 1609 DSA series. The 2-class classification model had a specificity of 0.97 ±0.01 and a sensitivity of 0.86 ±0.01. The 3-class models showed insufficient performance, even when combined with the true thrombus positions, with F1 scores for TICI 2b classification of 0.50 and 0.55 ±0.07, respectively. The automatic thrombus detection module did not enhance the performance of the 3-class model, with an F1 score for the TICI 2b class of 0.50 ±0.07. The AI model provided a reproducible 2-class (TICI 0, 1, or 2a versus 2b, 2c, or 3) classification according to the TICI scale. Its performance in distinguishing three classes (TICI 0, 1, or 2a versus 2b versus 2c or 3) remains insufficient for clinical practice. Automatic thrombus detection did not improve the model's performance.
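
A minimal sketch of the 2-class formulation (TICI 0, 1, or 2a versus 2b, 2c, or 3), assuming a small PyTorch CNN that treats a fixed-length stack of DSA frames as input channels; the frame count, image size, and layer configuration are illustrative assumptions rather than the study's model.

```python
# Minimal sketch of a 2-class TICI classifier operating on a fixed-length stack
# of DSA frames treated as input channels. Architecture details are assumptions.
import torch
import torch.nn as nn

class TICIBinaryNet(nn.Module):
    def __init__(self, n_frames=16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_frames, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1))
        self.classifier = nn.Linear(64, 2)   # logits for the two TICI groups

    def forward(self, x):                    # x: (batch, n_frames, H, W)
        return self.classifier(self.features(x).flatten(1))

model = TICIBinaryNet()
series = torch.randn(2, 16, 256, 256)        # two synthetic DSA series
loss = nn.CrossEntropyLoss()(model(series), torch.tensor([0, 1]))
loss.backward()                              # one illustrative backward pass (no optimizer shown)
print(float(loss))
```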

Predicting brain metastases in EGFR-positive lung adenocarcinoma patients using pre-treatment CT lung imaging data.

He X, Guan C, Chen T, Wu H, Su L, Zhao M, Guo L

PubMed · Jun 26, 2025
This study aims to establish a dual-feature fusion model integrating radiomic features with deep learning features, utilizing single-modality pre-treatment lung CT image data to achieve early warning of brain metastasis (BM) risk within 2 years in EGFR-positive lung adenocarcinoma. After rigorous screening of 362 EGFR-positive lung adenocarcinoma patients with pre-treatment lung CT images, 173 eligible participants were ultimately enrolled in this study, including 93 patients with BM and 80 without BM. Radiomic features were extracted from manually segmented lung nodule regions, and a selected subset of these features was used to develop radiomics models. For deep learning, ROI-level CT images were processed using several deep learning networks, including the novel vision mamba, which was applied for the first time in this context. A feature-level fusion model was developed by combining radiomic and deep learning features. Model performance was assessed using receiver operating characteristic (ROC) curves and decision curve analysis (DCA), with statistical comparisons of area under the curve (AUC) values using the DeLong test. Among the models evaluated, the fused vision mamba model demonstrated the best classification performance, achieving an AUC of 0.86 (95% CI: 0.82-0.90), with a recall of 0.88, an F1-score of 0.70, and an accuracy of 0.76. This fusion model outperformed both the radiomics-only and deep learning-only models, highlighting its superior predictive accuracy for early BM risk detection in EGFR-positive lung adenocarcinoma patients. The fused vision mamba model, using only single-modality pre-treatment CT imaging data, significantly enhances the prediction of brain metastasis within two years in EGFR-positive lung adenocarcinoma patients. This novel approach, combining radiomic and deep learning features, offers promising clinical value for early detection and personalized treatment.
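
The feature-level fusion idea can be sketched by comparing a radiomics-only, a deep-feature-only, and a fused classifier by AUC; the logistic-regression models, feature dimensions, and synthetic labels below are assumptions for illustration (the study fused selected radiomic features with vision mamba features).

```python
# Sketch of feature-level fusion: compare radiomics-only, deep-only, and fused
# models by AUC on synthetic data standing in for the EGFR-positive cohort.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 173
radiomics = rng.standard_normal((n, 60))          # handcrafted nodule features
deep_feats = rng.standard_normal((n, 256))        # e.g. pooled vision-backbone embeddings
y = rng.integers(0, 2, size=n)                    # BM within 2 years (synthetic)

def auc_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, stratify=y, random_state=1)
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("radiomics only:", auc_for(radiomics))
print("deep only     :", auc_for(deep_feats))
print("fused         :", auc_for(np.hstack([radiomics, deep_feats])))
```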