
The comparison of deep learning and radiomics in the prediction of polymyositis.

Wu G, Li B, Li T, Liu L

PubMed · Sep 12, 2025
T2-weighted magnetic resonance imaging has become a commonly used noninvasive examination method for the diagnosis of polymyositis (PM). Data comparing deep learning and radiomics in the diagnosis of PM are still lacking. This study investigates the feasibility of a 3D convolutional neural network (CNN) for the prediction of PM, in comparison with radiomics. A total of 120 patients (60 with PM) came from center A, 30 (15 with PM) from center B, and 46 (23 with PM) from center C. Data from center A were used for training, data from center B for validation, and data from center C as the external test set. Magnetic resonance radiomics features of the rectus femoris were obtained for all cases. Maximum correlation minimum redundancy and least absolute shrinkage and selection operator regression were applied before establishing a radiomics score model. A 3D CNN classification model was trained with MONAI on 150 labeled cases. A 3D U-Net segmentation model was also trained with MONAI on 196 original images and their rectus femoris segmentations. Accuracy on the external test data was compared between the 2 methods using the paired chi-square test. PM and non-PM cases did not differ in age or gender (P > .05). The 3D CNN classification model achieved an accuracy of 97% on the validation data. The sensitivity, specificity, accuracy, and positive predictive value of the 3D CNN classification model on the external test data were 96% (22/23), 91% (21/23), 93% (43/46), and 92% (22/24), respectively. The radiomics score achieved an accuracy of 90% on the validation data. The sensitivity, specificity, accuracy, and positive predictive value of the radiomics score on the external test data were 70% (16/23), 65% (15/23), 67% (31/46), and 67% (16/24), respectively, significantly lower than those of the CNN model (P = .035). The 3D segmentation model for the rectus femoris on T2-weighted magnetic resonance images achieved a Dice similarity coefficient of 0.71. The 3D CNN model is not inferior to the radiomics score in the prediction of PM. The combination of deep learning and radiomics is recommended for the evaluation of PM in future clinical practice.
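The abstract names the MONAI library but gives no implementation detail. As a rough illustration of how a 3D CNN classifier of this kind could be set up, here is a minimal sketch using MONAI and PyTorch; the file paths, input size, DenseNet backbone, and training hyperparameters are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (not the authors' code): a 3D CNN classifier for PM vs. non-PM using MONAI.
# Assumes one NIfTI volume of the rectus femoris region per case; paths and sizes are placeholders.
import torch
from monai.data import DataLoader, Dataset
from monai.networks.nets import DenseNet121
from monai.transforms import (Compose, EnsureChannelFirstd, EnsureTyped,
                              LoadImaged, Resized, ScaleIntensityd)

train_files = [  # placeholders; the study trained on 150 labeled cases
    {"img": "centerA_case001.nii.gz", "label": 1},
    {"img": "centerA_case002.nii.gz", "label": 0},
]

transforms = Compose([
    LoadImaged(keys="img"),
    EnsureChannelFirstd(keys="img"),
    ScaleIntensityd(keys="img"),
    Resized(keys="img", spatial_size=(96, 96, 32)),  # assumed input size
    EnsureTyped(keys="img"),
])

loader = DataLoader(Dataset(data=train_files, transform=transforms), batch_size=2, shuffle=True)
device = "cuda" if torch.cuda.is_available() else "cpu"
model = DenseNet121(spatial_dims=3, in_channels=1, out_channels=2).to(device)  # PM vs. non-PM
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

for epoch in range(10):  # epoch count is arbitrary here
    for batch in loader:
        images = batch["img"].to(device)
        labels = torch.as_tensor(batch["label"]).long().to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

For the final comparison step, the paired chi-square test on the external test set is commonly implemented as McNemar's test, for example via statsmodels.stats.contingency_tables.mcnemar.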

Diagnostic performance of ChatGPT-4.0 in elbow fracture detection: A comparative study of radial head, distal humerus, and olecranon fractures.

Gültekin A, Gök Ü, Uyar AÇ, Serarslan U, Bitlis AT

PubMed · Sep 12, 2025
Artificial intelligence has been increasingly used for radiographic fracture detection in recent years. However, its performance in the diagnosis of displaced and non-displaced fractures in specific anatomical regions has not been sufficiently investigated. This study aimed to evaluate the accuracy and sensitivity of Chat Generative Pretrained Transformer (ChatGPT-4.0) in the diagnosis of radial head, distal humerus, and olecranon fractures. Anonymized radiographs, previously confirmed by an expert radiologist and orthopedist, were evaluated. Anteroposterior and lateral radiographs of 266 patients were analyzed. Each fracture site was divided into 2 groups: displaced and non-displaced. ChatGPT-4.0 was asked 2 questions per image to determine whether a fracture was present. Responses were categorized as "fracture detected in the first question," "fracture detected in the second question," or "no fracture detected." ChatGPT-4.0 showed significantly higher accuracy in diagnosing displaced fractures at all sites (P < .001). The highest fracture detection rate in the first question was observed for displaced distal humeral fractures (87.7%). The success rate was significantly lower for non-displaced fractures, and in the non-displaced group the highest diagnostic rate was observed for radial head fractures (25.3%). No statistically significant difference was found in pairwise sensitivity comparisons between non-displaced fractures (P > .05). ChatGPT-4.0 shows promising diagnostic performance in the detection of displaced olecranon, radial head, and distal humeral fractures. However, its limited success with non-displaced fractures indicates that the model requires further training and development before clinical use. Level of evidence: 3.
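The abstract does not state how the radiographs were submitted or how the two questions were phrased; the study presumably used the ChatGPT interface. Purely as an illustration of how such a two-question image query could be scripted, here is a sketch using the OpenAI Python API; the model name, prompt wording, and file paths are assumptions.

```python
# Illustrative sketch only: asking a vision-capable chat model about an elbow radiograph twice.
# Model name, prompts, and paths are assumptions, not the study protocol.
import base64
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def ask(image_path: str, question: str) -> str:
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed vision-capable model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content

# Two-question protocol as described above (hypothetical wording):
first = ask("elbow_lateral.png", "Is there a fracture on this elbow radiograph?")
second = ask("elbow_lateral.png", "Look again carefully: do you see any fracture line or cortical break?")
```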

Machine-learning model for differentiating round pneumonia and primary lung cancer using CT-based radiomic analysis.

Genç H, Yildirim M

PubMed · Sep 12, 2025
Round pneumonia is a benign lung condition that can radiologically mimic primary lung cancer, making diagnosis challenging. Accurately distinguishing between these diseases is critical to avoid unnecessary invasive procedures. This study aims to distinguish round pneumonia from primary lung cancer by developing machine-learning models based on radiomic features extracted from computed tomography (CT) images. This retrospective observational study included 24 patients diagnosed with round pneumonia and 24 with histopathologically confirmed primary lung cancer. The lesions were manually segmented on the CT images by 2 radiologists. In total, 107 radiomic features were extracted from each case. Feature selection was performed using an information-gain algorithm to identify the 5 most relevant features. Seven machine-learning classifiers (Naïve Bayes, support vector machine, Random Forest, Decision Tree, Neural Network, Logistic Regression, and k-NN) were trained and validated. Model performance was evaluated using the area under the curve (AUC), classification accuracy, sensitivity, and specificity. The Naïve Bayes, support vector machine, and Random Forest models achieved perfect classification performance on the entire dataset (AUC = 1.000). After feature selection, the Naïve Bayes model maintained high performance with an AUC of 1.000, accuracy of 0.979, sensitivity of 0.958, and specificity of 1.000. Machine-learning models using CT-based radiomics features can effectively differentiate round pneumonia from primary lung cancer. These models offer a promising noninvasive tool to aid in radiological diagnosis and reduce diagnostic uncertainty.
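The described pipeline maps naturally onto PyRadiomics and scikit-learn. The sketch below is an assumed reconstruction for illustration only: file paths are placeholders, mutual information stands in for the information-gain ranking, and classifier settings are defaults rather than the authors' choices.

```python
# Assumed reconstruction (not the authors' code): radiomics extraction, feature ranking, classification.
import numpy as np
from radiomics import featureextractor
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

extractor = featureextractor.RadiomicsFeatureExtractor()  # defaults yield roughly 107 features per lesion

def lesion_features(image_path, mask_path):
    result = extractor.execute(image_path, mask_path)
    # keep numeric radiomics values, drop the diagnostic metadata entries
    return [float(v) for k, v in result.items() if not k.startswith("diagnostics")]

# One (CT image, manual mask, label) triple per lesion; 0 = round pneumonia, 1 = lung cancer.
# Paths are placeholders; the full 48-lesion cohort would be listed here.
cases = [
    ("ct_pneumonia_01.nrrd", "mask_pneumonia_01.nrrd", 0),
    ("ct_cancer_01.nrrd", "mask_cancer_01.nrrd", 1),
]
X = np.array([lesion_features(img, msk) for img, msk, _ in cases])
y = np.array([label for _, _, label in cases])

X5 = SelectKBest(mutual_info_classif, k=5).fit_transform(X, y)  # 5 most relevant features, as in the study

for name, clf in [("Naive Bayes", GaussianNB()),
                  ("SVM", SVC()),
                  ("Random Forest", RandomForestClassifier())]:
    acc = cross_val_score(clf, X5, y, cv=5).mean()  # the study also reported AUC, sensitivity, specificity
    print(name, round(acc, 3))
```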

Machine learning model based on the radiomics features of CE-CBBCT shows promising predictive ability for HER2-positive BC.

Chen X, Li M, Liang X, Su D

PubMed · Sep 12, 2025
This study aimed to investigate whether a machine learning (ML) model based on contrast-enhanced cone-beam breast computed tomography (CE-CBBCT) radiomic features could predict human epidermal growth factor receptor 2-positive breast cancer (BC). Eighty-eight patients diagnosed with invasive BC who underwent preoperative CE-CBBCT were retrospectively enrolled. Patients were randomly assigned to the training and testing cohorts at a ratio of approximately 7:3. A total of 1046 quantitative radiomics features were extracted from the CE-CBBCT images using PyRadiomics. Z-score normalization was used to standardize the radiomics features, and the Pearson correlation coefficient and one-way analysis of variance were used to select significant features. Six ML algorithms (support vector machine, random forest [RF], logistic regression, AdaBoost, linear discriminant analysis, and decision tree) were used to construct optimal predictive models. Receiver operating characteristic curves were constructed and the area under the curve (AUC) was calculated. Four top-performing radiomic models were selected to develop the 6 predictive features. The AUC values for support vector machine, linear discriminant analysis, RF, logistic regression, AdaBoost, and decision tree were 0.741, 0.753, 1.000, 0.752, 1.000, and 1.000, respectively, in the training cohort, and 0.700, 0.671, 0.806, 0.665, 0.706, and 0.712, respectively, in the testing cohort. Notably, the RF model exhibited the highest predictive ability, with an AUC of 0.806 in the testing cohort. For the RF model, the DeLong test showed a statistically significant difference in AUC between the training and testing cohorts (Z = 2.105, P = .035). The ML model based on CE-CBBCT radiomics features showed promising predictive ability for human epidermal growth factor receptor 2-positive BC, with the RF model demonstrating the best diagnostic performance.
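For readers who want to reproduce this kind of workflow, the sketch below shows its broad shape in scikit-learn: a stratified 7:3 split, z-score scaling, a simple univariate screen (ANOVA here; the Pearson correlation filtering used in the study is omitted for brevity), and test-set AUC for the six algorithm families named above. The CSV layout and all settings are assumptions, not the authors' code.

```python
# Hedged sketch of the described workflow; CSV name, column layout, and settings are assumptions.
import pandas as pd
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("cbbct_radiomics.csv")          # placeholder: 1046 feature columns plus a "her2" label
X, y = df.drop(columns="her2").values, df["her2"].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

scaler = StandardScaler().fit(X_train)           # z-score normalization, fitted on the training cohort
selector = SelectKBest(f_classif, k=6).fit(scaler.transform(X_train), y_train)  # 6 features, as reported
Xtr = selector.transform(scaler.transform(X_train))
Xte = selector.transform(scaler.transform(X_test))

models = {
    "SVM": SVC(probability=True),
    "RF": RandomForestClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "AdaBoost": AdaBoostClassifier(),
    "LDA": LinearDiscriminantAnalysis(),
    "DT": DecisionTreeClassifier(),
}
for name, clf in models.items():
    clf.fit(Xtr, y_train)
    print(name, "test AUC =", round(roc_auc_score(y_test, clf.predict_proba(Xte)[:, 1]), 3))
```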

Novel BDefRCNLSTM: an efficient ensemble deep learning approach for enhanced brain tumor detection and categorization with segmentation.

Janapati M, Akthar S

PubMed · Sep 11, 2025
Brain tumour detection and classification are critical for improving patient prognosis and treatment planning. However, manual identification from magnetic resonance imaging (MRI) scans is time-consuming, error-prone, and reliant on expert interpretation. The increasing complexity of tumour characteristics necessitates automated solutions to enhance accuracy and efficiency. This study introduces a novel ensemble deep learning model, boosted deformable and residual convolutional network with bi-directional convolutional long short-term memory (BDefRCNLSTM), for the classification and segmentation of brain tumours. The proposed framework integrates entropy-based local binary pattern (ELBP) for extracting spatial semantic features and employs the enhanced sooty tern optimisation (ESTO) algorithm for optimal feature selection. Additionally, an improved X-Net model is utilised for precise segmentation of tumour regions. The model is trained and evaluated on Figshare, Brain MRI, and Kaggle datasets using multiple performance metrics. Experimental results demonstrate that the proposed BDefRCNLSTM model achieves over 99% accuracy in both classification and segmentation, outperforming existing state-of-the-art approaches. The findings establish the proposed approach as a clinically viable solution for automated brain tumour diagnosis. The integration of optimised feature selection and advanced segmentation techniques improves diagnostic accuracy, potentially assisting radiologists in making faster and more reliable decisions.
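Of the components listed above, the local-binary-pattern stage is the easiest to illustrate in isolation. The snippet below extracts a plain LBP texture histogram from a single MRI slice with scikit-image; the paper's entropy-based variant (ELBP) and the rest of the BDefRCNLSTM pipeline are not reproduced, and the path and parameters are placeholders.

```python
# Plain LBP texture histogram from one MRI slice (illustration only; not the paper's ELBP).
import numpy as np
from skimage import io
from skimage.feature import local_binary_pattern

slice2d = io.imread("brain_mri_slice.png", as_gray=True)  # placeholder path
P, R = 8, 1                                               # 8 neighbours at radius 1
lbp = local_binary_pattern(slice2d, P, R, method="uniform")

# Histogram of uniform LBP codes as a compact texture descriptor
hist, _ = np.histogram(lbp.ravel(), bins=np.arange(0, P + 3), density=True)
print(hist)
```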

UltraSam: a foundation model for ultrasound using large open-access segmentation datasets.

Meyer A, Murali A, Zarin F, Mutter D, Padoy N

PubMed · Sep 11, 2025
Automated ultrasound (US) image analysis remains a longstanding challenge due to anatomical complexity and the scarcity of annotated data. Although large-scale pretraining has improved data efficiency in many visual domains, its impact in US is limited by a pronounced domain shift from other imaging modalities and high variability across clinical applications, such as chest, ovarian, and endoscopic imaging. To address this, we propose UltraSam, a SAM-style model trained on a heterogeneous collection of publicly available segmentation datasets, originally developed in isolation. UltraSam is trained under the prompt-conditioned segmentation paradigm, which eliminates the need for unified labels and enables generalization to a broad range of downstream tasks. We compile US-43d, a large-scale collection of 43 open-access US datasets comprising over 282,000 images with segmentation masks covering 58 anatomical structures. We explore adaptation and fine-tuning strategies for SAM and systematically evaluate transferability across downstream tasks, comparing against state-of-the-art pretraining methods. We further propose prompted classification, a new use case where object-specific prompts and image features are jointly decoded to improve classification performance. In experiments on three diverse public US datasets, UltraSam outperforms existing SAM variants on prompt-based segmentation and surpasses self-supervised US foundation models on downstream (prompted) classification and instance segmentation tasks. UltraSam demonstrates that SAM-style training on diverse, sparsely annotated US data enables effective generalization across tasks. By unlocking the value of fragmented public datasets, our approach lays the foundation for scalable, real-world US representation learning. We release our code and pretrained models at https://github.com/CAMMA-public/UltraSam and invite the community to further this effort by continuing to contribute high-quality datasets.
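For readers unfamiliar with the prompt-conditioned segmentation paradigm that UltraSam builds on, the sketch below shows point-prompted inference with the original segment-anything package. Whether the released UltraSam checkpoints load through this exact interface should be checked against the linked repository; the checkpoint name, image path, and prompt coordinates here are placeholders.

```python
# SAM-style prompt-conditioned segmentation with the vanilla segment-anything API (illustration only).
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")  # placeholder checkpoint
predictor = SamPredictor(sam)

image = cv2.cvtColor(cv2.imread("ultrasound_frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One positive point prompt on the structure of interest (coordinates are illustrative)
point_coords = np.array([[256, 180]])
point_labels = np.array([1])
masks, scores, _ = predictor.predict(point_coords=point_coords,
                                     point_labels=point_labels,
                                     multimask_output=True)
best_mask = masks[np.argmax(scores)]  # highest-scoring candidate mask
```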

Artificial intelligence in gastric cancer: a systematic review of machine learning and deep learning applications.

Alsallal M, Habeeb MS, Vaghela K, Malathi H, Vashisht A, Sahu PK, Singh D, Al-Hussainy AF, Aljanaby IA, Sameer HN, Athab ZH, Adil M, Yaseen A, Farhood B

PubMed · Sep 11, 2025
Gastric cancer (GC) remains a major global health concern, ranking as the fifth most prevalent malignancy and the fourth leading cause of cancer-related mortality worldwide. Although early detection can increase the 5-year survival rate of early gastric cancer (EGC) to over 90%, more than 80% of cases are diagnosed at advanced stages due to subtle clinical symptoms and diagnostic challenges. Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has shown great promise in addressing these limitations. This systematic review aims to evaluate the performance, applications, and limitations of ML and DL models in GC management, with a focus on their use in detection, diagnosis, treatment planning, and prognosis prediction across diverse clinical imaging and data modalities. Following the PRISMA 2020 guidelines, a comprehensive literature search was conducted in MEDLINE, Web of Science, and Scopus for studies published between 2004 and May 2025. Eligible studies applied ML or DL algorithms for diagnostic or prognostic tasks in GC using data from endoscopy, computed tomography (CT), pathology, or multi-modal sources. Two reviewers independently performed study selection, data extraction, and risk of bias assessment. A total of 59 studies met the inclusion criteria. DL models, particularly convolutional neural networks (CNNs), demonstrated strong performance in EGC detection, with reported sensitivities up to 95.3% and Area Under the Curve (AUCs) as high as 0.981, often exceeding expert endoscopists. CT-based radiomics and DL models achieved AUCs ranging from 0.825 to 0.972 for tumor staging and metastasis prediction. Pathology-based models reported accuracies up to 100% for EGC detection and AUCs up to 0.92 for predicting treatment response. Cross-modality approaches combining radiomics and pathomics achieved AUCs up to 0.951. Key challenges included algorithmic bias, limited dataset diversity, interpretability issues, and barriers to clinical integration. ML and DL models have demonstrated substantial potential to improve early detection, diagnostic accuracy, and individualized treatment in GC. To advance clinical adoption, future research should prioritize the development of large, diverse datasets, implement explainable AI frameworks, and conduct prospective clinical trials. These efforts will be essential for integrating AI into precision oncology and addressing the increasing global burden of gastric cancer.

A full-scale attention-augmented CNN-transformer model for segmentation of oropharyngeal mucosa organs-at-risk in radiotherapy.

He L, Sun J, Lu S, Li J, Wang X, Yan Z, Guan J

PubMed · Sep 11, 2025
Radiation-induced oropharyngeal mucositis (ROM) is a common and severe side effect of radiotherapy in nasopharyngeal cancer patients, leading to significant clinical complications such as malnutrition, infections, and treatment interruptions. Accurate delineation of the oropharyngeal mucosa (OPM) as an organ-at-risk (OAR) is crucial to minimizing radiation exposure and preventing ROM. This study aims to develop and validate an advanced automatic segmentation model, the attention-augmented Swin U-Net transformer (AA-Swin UNETR), for accurate delineation of the OPM to improve radiotherapy planning and reduce the incidence of ROM. We proposed a hybrid CNN-transformer model, AA-Swin UNETR, based on the Swin UNETR framework, which integrates hierarchical feature extraction with full-scale attention mechanisms. The model includes a Swin Transformer-based encoder and a CNN-based decoder with residual blocks, connected via a full-scale feature connection scheme. The full-scale attention mechanism enables the model to capture long-range dependencies and multi-level features effectively, enhancing segmentation accuracy. The model was trained on a dataset of 202 CT scans from Nanfang Hospital, using expert manual delineations as the gold standard. We evaluated the performance of AA-Swin UNETR against state-of-the-art (SOTA) segmentation models, including Swin UNETR, nnUNet, and 3D UX-Net, using geometric and dosimetric evaluation parameters. The geometric metrics include Dice similarity coefficient (DSC), surface DSC (sDSC), volume similarity (VS), Hausdorff distance (HD), precision, and recall. The dosimetric metrics include changes of D<sub>0.1 cc</sub> and D<sub>mean</sub> (ΔD<sub>0.1 cc</sub> and ΔD<sub>mean</sub>) between results derived from manually delineated OPM and the auto-segmentation models. The AA-Swin UNETR model achieved the highest mean DSC of 87.72 ± 1.98%, significantly outperforming Swin UNETR (83.53 ± 2.59%), nnUNet (85.48 ± 2.68%), and 3D UX-Net (80.04 ± 3.76%). The model also showed superior mean sDSC (98.44 ± 1.08%), mean VS (97.86 ± 1.43%), mean precision (87.60 ± 3.06%), and mean recall (89.22 ± 2.70%), with a competitive mean HD of 9.03 ± 2.79 mm. For the dosimetric evaluation, the proposed model generated the smallest mean ΔD<sub>0.1 cc</sub> (0.46 ± 4.92 cGy) and mean ΔD<sub>mean</sub> (6.26 ± 24.90 cGy) relative to manual delineation compared with the other auto-segmentation results (mean ΔD<sub>0.1 cc</sub> of Swin UNETR = -0.56 ± 7.28 cGy, nnUNet = 0.99 ± 4.73 cGy, 3D UX-Net = -0.65 ± 8.05 cGy; mean ΔD<sub>mean</sub> of Swin UNETR = 7.46 ± 43.37, nnUNet = 21.76 ± 37.86, and 3D UX-Net = 44.61 ± 62.33). In this paper, we proposed a hybrid transformer-CNN deep learning model, AA-Swin UNETR, for automatic segmentation of the OPM as an OAR structure in radiotherapy planning. Evaluations with geometric and dosimetric parameters demonstrated that AA-Swin UNETR can generate delineations close to a manual reference in terms of both geometry and dose-volume metrics. The proposed model outperformed existing SOTA models on both sets of evaluation metrics and demonstrated its capability to accurately segment the complex anatomical structure of the OPM, providing a reliable tool for enhancing radiotherapy planning.
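The attention-augmented architecture itself is not specified here in enough detail to reproduce, but the geometric evaluation it reports is straightforward to compute with MONAI. The sketch below evaluates DSC and Hausdorff distance between a predicted and a manual OPM mask; the tensors are random placeholders.

```python
# Geometric evaluation sketch (DSC and Hausdorff distance) with MONAI metrics; masks are placeholders.
import torch
from monai.metrics import DiceMetric, HausdorffDistanceMetric

def to_onehot(label_map, num_classes=2):
    # (B, 1, H, W, D) integer label map -> (B, C, H, W, D) one-hot encoding
    return torch.nn.functional.one_hot(label_map.squeeze(1).long(), num_classes).permute(0, 4, 1, 2, 3)

pred = torch.randint(0, 2, (1, 1, 96, 96, 96))   # auto-segmentation (placeholder)
gt = torch.randint(0, 2, (1, 1, 96, 96, 96))     # manual delineation (placeholder)

dice = DiceMetric(include_background=False)
hausdorff = HausdorffDistanceMetric(include_background=False)  # reported in voxel units here

dice(y_pred=to_onehot(pred), y=to_onehot(gt))
hausdorff(y_pred=to_onehot(pred), y=to_onehot(gt))
print("DSC:", dice.aggregate().item(), "HD:", hausdorff.aggregate().item())
```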

Mechanistic Learning with Guided Diffusion Models to Predict Spatio-Temporal Brain Tumor Growth

Daria Laslo, Efthymios Georgiou, Marius George Linguraru, Andreas Rauschecker, Sabine Muller, Catherine R. Jutzeler, Sarah Bruningk

arXiv preprint · Sep 11, 2025
Predicting the spatio-temporal progression of brain tumors is essential for guiding clinical decisions in neuro-oncology. We propose a hybrid mechanistic learning framework that combines a mathematical tumor growth model with a guided denoising diffusion implicit model (DDIM) to synthesize anatomically feasible future MRIs from preceding scans. The mechanistic model, formulated as a system of ordinary differential equations, captures temporal tumor dynamics including radiotherapy effects and estimates future tumor burden. These estimates condition a gradient-guided DDIM, enabling image synthesis that aligns with both predicted growth and patient anatomy. We train our model on the BraTS adult and pediatric glioma datasets and evaluate on 60 axial slices of in-house longitudinal pediatric diffuse midline glioma (DMG) cases. Our framework generates realistic follow-up scans based on spatial similarity metrics. It also introduces tumor growth probability maps, which capture both clinically relevant extent and directionality of tumor growth as shown by 95th percentile Hausdorff Distance. The method enables biologically informed image generation in data-limited scenarios, offering generative-space-time predictions that account for mechanistic priors.
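The abstract does not give the exact ODE system, but the mechanistic component it describes, temporal tumor-burden dynamics with a radiotherapy effect, can be illustrated with a generic logistic-growth model and a dose-dependent kill term. Everything below (equation form, parameters, dose schedule) is an illustrative assumption, not the paper's model.

```python
# Illustrative mechanistic tumor-burden ODE (not the paper's model): logistic growth with an RT kill term.
import numpy as np
from scipy.integrate import solve_ivp

rho, K, alpha = 0.05, 100.0, 0.3        # growth rate [1/day], carrying capacity, RT sensitivity (assumed)
rt_days = set(range(10, 40))            # assumed daily fractions between day 10 and day 39
dose_per_fraction = 2.0                 # Gy per fraction (assumed)

def dose(t):
    return dose_per_fraction if int(t) in rt_days else 0.0

def dvdt(t, v):
    growth = rho * v[0] * (1.0 - v[0] / K)
    rt_kill = alpha * dose(t) * v[0]
    return [growth - rt_kill]

sol = solve_ivp(dvdt, t_span=(0, 120), y0=[5.0], t_eval=np.linspace(0, 120, 121))
predicted_burden = sol.y[0]             # future tumor-burden estimate that would condition the guided DDIM
```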

DualTrack: Sensorless 3D Ultrasound needs Local and Global Context

Paul F. R. Wilson, Matteo Ronchetti, Rüdiger Göbl, Viktoria Markova, Sebastian Rosenzweig, Raphael Prevost, Parvin Mousavi, Oliver Zettinig

arXiv preprint · Sep 11, 2025
Three-dimensional ultrasound (US) offers many clinical advantages over conventional 2D imaging, yet its widespread adoption is limited by the cost and complexity of traditional 3D systems. Sensorless 3D US, which uses deep learning to estimate a 3D probe trajectory from a sequence of 2D US images, is a promising alternative. Local features, such as speckle patterns, can help predict frame-to-frame motion, while global features, such as coarse shapes and anatomical structures, can situate the scan relative to anatomy and help predict its general shape. In prior approaches, global features are either ignored or tightly coupled with local feature extraction, restricting the ability to robustly model these two complementary aspects. We propose DualTrack, a novel dual-encoder architecture that leverages decoupled local and global encoders specialized for their respective scales of feature extraction. The local encoder uses dense spatiotemporal convolutions to capture fine-grained features, while the global encoder utilizes an image backbone (e.g., a 2D CNN or foundation model) and temporal attention layers to embed high-level anatomical features and long-range dependencies. A lightweight fusion module then combines these features to estimate the trajectory. Experimental results on a large public benchmark show that DualTrack achieves state-of-the-art accuracy and globally consistent 3D reconstructions, outperforming previous methods and yielding an average reconstruction error below 5 mm.
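As a schematic of the dual-encoder idea described above, here is a compact PyTorch sketch with a 3D-convolutional local branch, a per-frame 2D backbone plus temporal self-attention as the global branch, and a linear fusion head regressing per-frame motion. The dimensions, the ResNet-18 backbone, and the 6-DoF output parameterization are assumptions, not the DualTrack implementation.

```python
# Schematic dual-encoder sketch (assumptions throughout; not the DualTrack code).
import torch
import torch.nn as nn
import torchvision

class DualEncoderTracker(nn.Module):
    def __init__(self, d=256):
        super().__init__()
        # Local branch: spatiotemporal 3D convolutions over a clip of 2D US frames
        self.local = nn.Sequential(
            nn.Conv3d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, d, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d((None, 1, 1)),        # keep the temporal axis, pool space
        )
        # Global branch: per-frame 2D backbone followed by temporal self-attention
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, d)
        self.backbone = backbone
        self.temporal = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=d, nhead=8, batch_first=True), num_layers=2
        )
        # Fusion head: per-frame 6-DoF motion (3 translations + 3 rotations)
        self.head = nn.Linear(2 * d, 6)

    def forward(self, frames):                          # frames: (B, T, 1, H, W)
        B, T, C, H, W = frames.shape
        local = self.local(frames.transpose(1, 2))      # (B, d, T, 1, 1)
        local = local.squeeze(-1).squeeze(-1).transpose(1, 2)           # (B, T, d)
        glob = self.backbone(frames.reshape(B * T, C, H, W).repeat(1, 3, 1, 1))
        glob = self.temporal(glob.reshape(B, T, -1))                    # (B, T, d)
        return self.head(torch.cat([local, glob], dim=-1))             # (B, T, 6) per-frame pose deltas

poses = DualEncoderTracker()(torch.rand(2, 16, 1, 128, 128))
```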