Page 135 of 3473463 results

Machine learning models for discriminating clinically significant from clinically insignificant prostate cancer using bi-parametric magnetic resonance imaging.

Ayyıldız H, İnce O, Korkut E, Dağoğlu Kartal MG, Tunacı A, Ertürk ŞM

PubMed · Jul 8 2025
This study aims to demonstrate the performance of machine learning algorithms in distinguishing clinically significant prostate cancer (csPCa) from clinically insignificant prostate cancer (ciPCa) on prostate bi-parametric magnetic resonance imaging (MRI) using radiomics features. MRI images of patients diagnosed with cancer, with histopathological confirmation following prostate MRI, were collected retrospectively. Patients with a Gleason score of 3+3 were considered to have ciPCa, and patients with a Gleason score of 3+4 and above were considered to have csPCa. Radiomics features were extracted from T2-weighted (T2W) images, apparent diffusion coefficient (ADC) images, and their corresponding Laplacian of Gaussian (LoG) filtered versions. Additionally, a third feature subset was created by combining the T2W and ADC features, enhancing the analysis with an integrated approach. After feature extraction, redundant features were eliminated using Pearson's correlation coefficient, and feature selection was performed using wrapper-based sequential algorithms. Models were then built using support vector machine (SVM) and logistic regression (LR) machine learning algorithms and validated using five-fold cross-validation. The study included 77 patients, 30 with ciPCa and 47 with csPCa. LoG filtering produced four filtered images per original image, and 111 features were obtained from each image. After feature selection, 5 features were retained from T2W images, 5 from ADC images, and 15 from the combined dataset. In the SVM model, area under the curve (AUC) values of 0.64 for T2W, 0.86 for ADC, and 0.86 for the combined dataset were obtained in the test set. In the LR model, AUC values of 0.79 for T2W, 0.86 for ADC, and 0.85 for the combined dataset were obtained.
Machine learning models developed with radiomics can provide a decision support system to complement pathology results and help avoid invasive procedures such as re-biopsies or follow-up biopsies that are sometimes necessary today. This study demonstrates that machine learning models using radiomics features derived from bi-parametric MRI can discriminate csPCa from clinically insignificant PCa. These findings suggest that radiomics-based machine learning models have the potential to reduce the need for re-biopsy in cases of indeterminate pathology, assist in diagnosing pathology–radiology discordance, and support treatment decision-making in the management of PCa.
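The pipeline the abstract describes (correlation-based redundancy filtering, wrapper-based sequential feature selection, then an SVM validated with five-fold cross-validation) can be sketched in scikit-learn. The data here are synthetic and the correlation threshold (|r| > 0.9) is an assumption, not a value from the paper:

```python
# Hypothetical sketch of the abstract's pipeline on synthetic data:
# correlation filter -> wrapper-based forward selection -> SVM with 5-fold CV.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(77, 111))    # 77 patients, 111 radiomics features per image
y = rng.integers(0, 2, size=77)   # 0 = ciPCa, 1 = csPCa (synthetic labels)

# Step 1: drop one of each highly inter-correlated feature pair (threshold assumed).
corr = np.corrcoef(X, rowvar=False)
keep = np.ones(X.shape[1], dtype=bool)
for i in range(corr.shape[0]):
    for j in range(i + 1, corr.shape[1]):
        if keep[j] and abs(corr[i, j]) > 0.9:
            keep[j] = False
X_filtered = X[:, keep]

# Step 2: wrapper-based forward selection down to 5 features (as in the T2W/ADC sets).
svm = make_pipeline(StandardScaler(), SVC(kernel="linear"))
selector = SequentialFeatureSelector(svm, n_features_to_select=5,
                                     direction="forward", cv=3)
X_selected = selector.fit_transform(X_filtered, y)

# Step 3: stratified 5-fold cross-validated AUC.
auc = cross_val_score(svm, X_selected, y,
                      cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=0),
                      scoring="roc_auc").mean()
print(f"mean CV AUC: {auc:.2f}")
```

With random labels the AUC is uninformative; the point is the selection-then-validation order, which keeps the wrapper search from leaking into the final CV estimate.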

Integrating Machine Learning into Myositis Research: a Systematic Review.

Juarez-Gomez C, Aguilar-Vazquez A, Gonzalez-Gauna E, Garcia-Ordoñez GP, Martin-Marquez BT, Gomez-Rios CA, Becerra-Jimenez J, Gaspar-Ruiz A, Vazquez-Del Mercado M

PubMed · Jul 8 2025
Idiopathic inflammatory myopathies (IIM) are a group of autoimmune rheumatic diseases characterized by proximal muscle weakness and extramuscular manifestations. Since 1975, these IIM have been classified into different clinical phenotypes, each associated with a better or worse prognosis and a particular pathophysiology. Machine learning (ML) is a fascinating field of knowledge with applications worldwide. In IIM, ML is an emerging tool assessed in very specific clinical contexts as a complementary research tool, including transcriptome profiling of muscle biopsies and differential diagnosis using magnetic resonance imaging (MRI) and ultrasound (US). Together with cancer-associated risk and predisposing factors for interstitial lung disease (ILD) development, this systematic review evaluates 23 original studies using supervised learning models, including logistic regression (LR), random forest (RF), support vector machines (SVM), and convolutional neural networks (CNN), with performance assessed primarily through the area under the receiver operating characteristic curve (AUC-ROC).

The future of multimodal artificial intelligence models for integrating imaging and clinical metadata: a narrative review.

Simon BD, Ozyoruk KB, Gelikman DG, Harmon SA, Türkbey B

PubMed · Jul 8 2025
With the ongoing revolution of artificial intelligence (AI) in medicine, the impact of AI in radiology is more pronounced than ever. An increasing number of technical and clinical AI-focused studies are published each day. As these tools inevitably affect patient care and physician practices, it is crucial that radiologists become more familiar with the leading strategies and underlying principles of AI. Multimodal AI models can combine both imaging and clinical metadata and are quickly becoming a popular approach that is being integrated into the medical ecosystem. This narrative review covers major concepts of multimodal AI through the lens of recent literature. We discuss emerging frameworks, including graph neural networks, which allow for explicit learning from non-Euclidean relationships, and transformers, which allow for parallel computation that scales, highlighting existing literature and advocating for a focus on emerging architectures. We also identify key pitfalls in current studies, including issues with taxonomy, data scarcity, and bias. By informing radiologists and biomedical AI experts about existing practices and challenges, we hope to guide the next wave of imaging-based multimodal AI research.

AI-enhanced patient-specific dosimetry in I-131 planar imaging with a single oblique view.

Jalilifar M, Sadeghi M, Emami-Ardekani A, Bitarafan-Rajabi A, Geravand K, Geramifar P

PubMed · Jul 8 2025
This study aims to enhance dosimetry accuracy in <sup>131</sup>I planar imaging by utilizing a single oblique view and Monte Carlo (MC) validated dose point kernels (DPKs), alongside the integration of artificial intelligence (AI) for accurate dose prediction within planar imaging. Forty patients with thyroid cancers post-thyroidectomy surgery and 30 with neuroendocrine tumors underwent planar and SPECT/CT imaging. Using whole-body (WB) planar images with an additional oblique view, organ thicknesses were estimated. DPKs and organ-specific S-values were used to estimate the absorbed doses. Six AI algorithms, namely multilayer perceptron (MLP), linear regression, support vector regression, decision tree, convolutional neural network, and U-Net, were used for dose estimation. Planar image counts, body thickness, patient BMI, age, S-values, and tissue attenuation coefficients were used as inputs to the AI algorithms. To provide the ground truth, CT-based segmentation generated binary masks for each organ, and the corresponding SPECT images were used for GATE MC dosimetry. The MLP-predicted dose values showed superior performance across all organs, with the lowest mean absolute error in the liver and higher errors in the spleen and salivary glands. Notably, MLP-based dose estimations closely matched ground truth data, with < 15% differences in most tissues. The MLP-estimated dose values represent a robust patient-specific dosimetry approach capable of swiftly predicting absorbed doses in different organs using WB planar images and a single oblique view. This approach facilitates the implementation of 2D planar imaging as a pre-therapeutic technique for a more accurate assessment of the administered activity.
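A regression model of the kind the abstract lists as its best performer (an MLP mapping planar counts, thickness, BMI, age, S-value, and attenuation coefficient to an absorbed dose) can be sketched with scikit-learn. Everything here is an assumption: the features, units, the toy dose formula, and the network size are illustrative, not the authors':

```python
# Illustrative MLP dose-regression sketch on synthetic data (not the authors' model).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 70  # 40 thyroid + 30 neuroendocrine patients in the study
X = np.column_stack([
    rng.uniform(1e3, 1e6, n),    # planar image counts
    rng.uniform(15, 35, n),      # body thickness (cm, assumed range)
    rng.uniform(18, 35, n),      # BMI
    rng.uniform(20, 80, n),      # age (years)
    rng.uniform(1e-5, 1e-3, n),  # organ S-value (assumed units)
    rng.uniform(0.1, 0.2, n),    # tissue attenuation coefficient (1/cm)
])
# Toy stand-in for the DPK-based dose: counts * S-value, attenuated by depth.
y = 1e-4 * X[:, 0] * X[:, 4] * np.exp(-X[:, 5] * X[:, 1])

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=5000, random_state=1))
model.fit(X, y)
mae = np.mean(np.abs(model.predict(X) - y))
print(f"training MAE: {mae:.4f}")
```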

A Deep Learning Model for Comprehensive Automated Bone Lesion Detection and Classification on Staging Computed Tomography Scans.

Simon BD, Harmon SA, Yang D, Belue MJ, Xu Z, Tetreault J, Pinto PA, Wood BJ, Citrin DE, Madan RA, Xu D, Choyke PL, Gulley JL, Turkbey B

PubMed · Jul 8 2025
Bone is a common site of metastasis for a variety of cancers; it is challenging and time-consuming to review yet important for cancer staging. Here, we developed a deep learning approach for detection and classification of bone lesions on staging CTs. This study developed an nnUNet model using 402 patients' CTs, including prostate cancer patients with benign or malignant osteoblastic (blastic) bone lesions, and patients with benign or malignant osteolytic (lytic) bone lesions from various primary cancers. An expert radiologist contoured ground truth lesions, and the model was evaluated for detection on a lesion level. For classification performance, accuracy, sensitivity, specificity, and other metrics were calculated. The held-out test set consisted of 69 patients (32 with bone metastases). The AUC of AI-predicted burden of disease was calculated on a patient level. In the independent test set, 70% of ground truth lesions were detected (67% of malignant lesions and 72% of benign lesions). The model achieved accuracy of 85% in classifying lesions as malignant or benign (91% sensitivity and 81% specificity). Although AI identified false positives in several benign patients, the patient-level AUC was 0.82 using predicted disease burden proportion. Our lesion detection and classification AI model performs accurately and has the potential to correct physician errors. Further studies should investigate if the model can impact physician review in terms of detection rate, classification accuracy, and review time.
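The two evaluation levels the abstract reports (lesion-level accuracy/sensitivity/specificity and a patient-level AUC over predicted disease-burden proportion) can be sketched with scikit-learn. All labels and scores below are synthetic placeholders:

```python
# Sketch of lesion-level and patient-level evaluation on synthetic labels.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(4)

# Lesion level: 1 = malignant, 0 = benign; predictions ~85% concordant.
y_true = rng.integers(0, 2, 200)
y_pred = np.where(rng.random(200) < 0.85, y_true, 1 - y_true)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)

# Patient level: AUC of a predicted burden proportion vs. metastasis status
# (69 test patients in the paper; the burden scores here are invented).
patient_status = rng.integers(0, 2, 69)
burden = np.clip(patient_status * 0.3 + rng.random(69) * 0.5, 0, 1)
auc = roc_auc_score(patient_status, burden)
print(f"acc {accuracy:.2f}, sens {sensitivity:.2f}, "
      f"spec {specificity:.2f}, patient AUC {auc:.2f}")
```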

Automated instance segmentation and registration of spinal vertebrae from CT-Scans with an improved 3D U-net neural network and corner point registration.

Hill J, Khokher MR, Nguyen C, Adcock M, Li R, Anderson S, Morrell T, Diprose T, Salvado O, Wang D, Tay GK

PubMed · Jul 8 2025
This paper presents a rapid and robust approach for 3D volumetric segmentation, labelling, and registration of human spinal vertebrae from CT scans using an optimised and improved 3D U-Net neural network architecture. The network is designed by incorporating residual and dense interconnections, followed by an extensive evaluation of different network setups that optimises components such as activation functions, optimisers, and pooling operations. In addition, the architecture is optimised for varying numbers of convolution layers per block and U-Net levels with fixed and cascading numbers of filters. For 3D virtual reality visualisation, the segmentation output of the improved 3D U-Net network is registered with the original scans through a corner point registration process. The registration takes into account the spatial coordinates of each segmented vertebra as a 3D volume and eight virtual fiducial markers to ensure alignment in all rotational planes. Trained on the VerSe'20 dataset, the proposed pipeline achieves a Dice similarity coefficient of 92.38% for vertebrae instance segmentation and a Hausdorff distance of 5.26 mm for vertebrae localisation on the VerSe'20 public test dataset, which outperforms many existing methods that participated in the VerSe'20 challenge. Integrated with Singular Health's MedVR software for virtual reality visualisation, the proposed solution has been deployed on standard edge-computing hardware in medical institutions. Depending on the scan size, the deployed solution takes between 90 and 210 s to label and segment vertebrae, including the cervical vertebrae. It is hoped that the acceleration of the segmentation and registration process will facilitate the easier preparation of future training datasets and benefit pre-surgical visualisation and planning.
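A corner-point rigid registration of the kind described (eight virtual fiducial markers per vertebra, aligned in all rotational planes) can be sketched with the classic Kabsch solution. The unit-cube fiducials and the test transform below are synthetic; the paper's exact fiducial placement and solver are not specified:

```python
# Minimal Kabsch sketch: recover the rigid transform aligning eight
# corner-point fiducials between two coordinate frames.
import numpy as np

def kabsch(P, Q):
    """Rigid (R, t) minimizing ||(P @ R.T + t) - Q|| for Nx3 point sets."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t

# Eight corners of a unit bounding box, standing in for the virtual fiducials.
corners = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)],
                   dtype=float)

# Apply a known rotation + translation, then recover it from the corner points.
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
moved = corners @ R_true.T + t_true

R, t = kabsch(corners, moved)
err = np.abs(moved - (corners @ R.T + t)).max()
print(f"max alignment error: {err:.2e}")
```

Eight non-coplanar points over-determine the six rigid degrees of freedom, which is what fixes the alignment "in all rotational planes".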

Deep supervised transformer-based noise-aware network for low-dose PET denoising across varying count levels.

Azimi MS, Felfelian V, Zeraatkar N, Dadgar H, Arabi H, Zaidi H

PubMed · Jul 8 2025
Reducing radiation dose from PET imaging is essential to minimize cancer risks; however, it often leads to increased noise and degraded image quality, compromising diagnostic reliability. Recent advances in deep learning have shown promising results in addressing these limitations through effective denoising. However, existing networks trained on specific noise levels often fail to generalize across diverse acquisition conditions. Moreover, training multiple models for different noise levels is impractical due to data and computational constraints. This study aimed to develop a supervised Swin Transformer-based unified noise-aware network (ST-UNN) that handles diverse noise levels and reconstructs high-quality images in low-dose PET imaging. ST-UNN incorporates multiple sub-networks, each designed to address specific noise levels ranging from 1 % to 10 %, and an adaptive weighting mechanism that dynamically integrates the outputs of these sub-networks to achieve effective denoising. The model was trained and evaluated using a PET/CT dataset encompassing the entire head and malignant lesions in the head and neck region. Performance was assessed using a combination of structural and statistical metrics, including the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), Standardized Uptake Value (SUV) mean bias, SUV<sub>max</sub> bias, and Root Mean Square Error (RMSE). This comprehensive evaluation ensured reliable results for both global and localized regions within PET images. ST-UNN consistently outperformed conventional networks, particularly in ultra-low-dose scenarios. At the 1 % count level, it achieved a PSNR of 34.77, RMSE of 0.05, and SSIM of 0.97, notably surpassing the baseline networks. It also achieved the lowest SUV<sub>mean</sub> bias (0.08) and lesion RMSE (0.12) at this level.
Across all count levels, ST-UNN maintained high performance and low error, demonstrating strong generalization and diagnostic integrity. ST-UNN offers a scalable, transformer-based solution for low-dose PET imaging. By dynamically integrating sub-networks, it effectively addresses noise variability and provides superior image quality, thereby advancing the capabilities of low-dose and dynamic PET imaging.
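The adaptive weighting mechanism can be illustrated in miniature: per-noise-level sub-networks each produce a candidate image, and a gate blends them. A softmax gate over hand-set affinity scores and toy shrinkage "denoisers" stand in here for the learned gate and the Swin Transformer sub-networks, neither of which is specified in the abstract:

```python
# Hedged sketch of noise-aware adaptive weighting over per-level denoisers.
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

rng = np.random.default_rng(2)
noisy = rng.normal(size=(8, 8))  # stand-in low-dose PET slice

# Stand-in sub-networks: shrinkage of increasing strength, one per count level.
def make_denoiser(strength):
    def denoise(img):
        return (1.0 - strength) * img  # toy "denoiser", not a real network
    return denoise

subnets = [make_denoiser(s) for s in (0.1, 0.3, 0.5)]  # e.g. 10 %, 5 %, 1 % counts

# Noise-affinity scores for the current input (a learned gate in the paper).
scores = np.array([0.2, 1.5, 0.3])
weights = softmax(scores)

candidates = np.stack([f(noisy) for f in subnets])   # (3, 8, 8)
output = np.tensordot(weights, candidates, axes=1)   # softmax-weighted blend
print("weights:", np.round(weights, 3), "output shape:", output.shape)
```

Because the weights sum to one, the blend stays inside the convex hull of the sub-network outputs, which is what lets one unified model cover the 1-10 % range instead of training a separate network per level.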

Post-hoc eXplainable AI methods for analyzing medical images of gliomas (A review for clinical applications).

Ayaz H, Sümer-Arpak E, Ozturk-Isik E, Booth TC, Tormey D, McLoughlin I, Unnikrishnan S

PubMed · Jul 8 2025
Deep learning (DL) has shown promise in glioma imaging tasks using magnetic resonance imaging (MRI) and histopathology images, yet its complexity demands greater transparency in artificial intelligence (AI) systems. This is noticeable when users must understand the model output for a clinical application. This systematic review covers 65 post-hoc eXplainable AI (XAI), or interpretable AI, studies that provide an understanding of why a system generated a given output for tasks related to glioma imaging. A framework of post-hoc XAI methods, such as Gradient-based XAI (G-XAI) and Perturbation-based XAI (P-XAI), is introduced to evaluate deep models and explain their application in gliomas. The papers on XAI techniques in gliomas are surveyed and categorized by their specific aims, such as grading, genetic biomarker detection, localization, intra-tumoral heterogeneity assessment, and survival analysis, and by their XAI approach. This review highlights the growing integration of XAI in glioma imaging, demonstrating its role in bridging AI decision-making and medical diagnostics. The co-occurrence analysis emphasizes the role of XAI methods in enhancing model transparency and trust and guiding future research toward more reliable clinical applications. Finally, the current challenges associated with DL and XAI approaches and their clinical integration are discussed, with an outlook on future opportunities from clinical users' perspectives and upcoming trends in XAI.

Enhancing stroke risk prediction through class balancing and data augmentation with CBDA-ResNet50.

Saleem MA, Javeed A, Akarathanawat W, Chutinet A, Suwanwela NC, Kaewplung P, Chaitusaney S, Benjapolakul W

PubMed · Jul 8 2025
Accurate prediction of stroke risk at an early stage is essential for timely intervention and prevention, especially given the serious health consequences and economic burden that strokes can cause. In this study, we proposed a class-balanced and data-augmented (CBDA-ResNet50) deep learning model to improve the stroke-risk prediction accuracy of the well-known ResNet50 architecture. Our approach uses class balancing and data augmentation to address common challenges in medical imaging datasets, such as class imbalance and limited training examples, which typically lead to biased or less reliable predictions. The proposed model ensures that predictions remain accurate even when some stroke risk factors are absent from the data. The performance of CBDA-ResNet50 is further improved by using the Adam optimizer and the ReduceLROnPlateau scheduler to adjust the learning rate. Applying weighted cross-entropy mitigates the imbalance between classes and significantly improves the results. The model achieves an accuracy of 97.87% and a balanced accuracy of 98.27%, better than many of the previous best models. This shows that more reliable predictions can be made by combining modern deep-learning models with advanced data-processing techniques. CBDA-ResNet50 has the potential to be a model for early stroke prevention, aiming to improve patient outcomes and reduce healthcare costs.
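The weighted cross-entropy device the abstract credits can be shown in a few lines: weights inversely proportional to class frequency up-weight the minority (stroke) class in the loss. The 90/10 imbalance and the constant model below are invented for illustration; the network itself is not reproduced:

```python
# Sketch of class-weighted cross-entropy with "balanced"-style weights.
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Mean weighted CE; probs is (N, C) softmax output, labels are class ints."""
    eps = 1e-12
    w = class_weights[labels]
    return -np.mean(w * np.log(probs[np.arange(len(labels)), labels] + eps))

labels = np.array([0] * 90 + [1] * 10)   # 90/10 imbalance, 1 = stroke (synthetic)
counts = np.bincount(labels)
# sklearn-style "balanced" weights: n_samples / (n_classes * class_count).
class_weights = len(labels) / (len(counts) * counts)

probs = np.tile([0.9, 0.1], (100, 1))    # a model that always favors class 0
loss = weighted_cross_entropy(probs, labels, class_weights)
print(f"class weights: {class_weights.round(2)}, weighted CE: {loss:.3f}")
```

Without the weights, the always-majority model's loss would be dominated by the 90 easy negatives; the 5.0 weight on the minority class makes its misclassification cost visible to the optimizer.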

Assessment of T2-weighted MRI-derived synthetic CT for the detection of suspected lumbar facet arthritis: a comparative analysis with conventional CT.

Cao G, Wang H, Xie S, Cai D, Guo J, Zhu J, Ye K, Wang Y, Xia J

PubMed · Jul 8 2025
We evaluated synthetic CT (sCT) generated from T2-weighted imaging (T2WI) using deep learning techniques to detect structural lesions in lumbar facet arthritis, with conventional CT as the reference standard. This single-center retrospective study included 40 patients who had lumbar MRI and CT within 1 week (September 2020 to August 2021). A Pix2Pix-GAN framework generated CT images from MRI data, and image quality was assessed using the structural similarity index (SSIM), mean absolute error (MAE), peak signal-to-noise ratio (PSNR), and Dice similarity coefficient (DSC). Two senior radiologists evaluated 15 anatomical landmarks. Sensitivity, specificity, and accuracy for detecting bone erosion, osteosclerosis, and joint space alterations were analyzed for sCT, T2-weighted MRI, and conventional CT. Forty participants (21 men, 19 women) were enrolled, with a mean age of 39 ± 16.9 years. sCT showed strong agreement with conventional CT, with SSIM values of 0.888 for axial and 0.889 for sagittal views. PSNR and MAE values were 24.56 dB and 0.031 for axial and 23.75 dB and 0.038 for sagittal views, respectively. DSC values were 0.935 for axial and 0.876 for sagittal views. sCT showed excellent intra- and inter-reader reliability, with intraclass correlation coefficients of 0.953-0.995 and 0.839-0.983, respectively. sCT had higher sensitivity (57.9% vs. 5.3%), specificity (98.8% vs. 84.6%), and accuracy (93.0% vs. 73.3%) for bone erosion than T2-weighted MRI and outperformed it for osteosclerosis and joint space changes. sCT outperformed conventional T2-weighted MRI in detecting structural lesions indicative of lumbar facet arthritis, with conventional CT as the reference standard.
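Three of the image-quality metrics used above (PSNR, MAE, and Dice) reduce to a few lines of numpy on intensity-normalized images; SSIM needs a windowed computation (e.g. skimage's `structural_similarity`) and is omitted. The images and the bone-mask threshold below are synthetic stand-ins:

```python
# Minimal PSNR / MAE / Dice sketch on synthetic, [0, 1]-normalized images.
import numpy as np

def psnr(ref, img, data_range=1.0):
    mse = np.mean((ref - img) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def mae(ref, img):
    return np.mean(np.abs(ref - img))

def dice(mask_a, mask_b):
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

rng = np.random.default_rng(3)
ct = rng.uniform(0.0, 1.0, size=(64, 64))                  # stand-in CT slice
sct = np.clip(ct + rng.normal(0.0, 0.03, ct.shape), 0, 1)  # stand-in synthetic CT

print(f"PSNR: {psnr(ct, sct):.2f} dB, MAE: {mae(ct, sct):.3f}, "
      f"Dice (thresholded mask): {dice(ct > 0.7, sct > 0.7):.3f}")
```

Note that PSNR depends on the stated `data_range`; for HU-valued CT the range must be set accordingly or the images normalized first, which is why the abstract's paired PSNR/MAE figures are only comparable within one normalization convention.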