Page 121 of 2052045 results

IM-LTS: An Integrated Model for Lung Tumor Segmentation using Neural Networks and IoMT.

J J, Haw SC, Palanichamy N, Ng KW, Thillaigovindhan SK

PubMed | Jun 1 2025
In recent years, Internet of Medical Things (IoMT) and Deep Learning (DL) techniques have been broadly used in medical data processing and decision-making. A lung tumour, one of the most dangerous medical conditions, requires early diagnosis with a high precision rate. With that concern, this work develops an Integrated Model (IM-LTS) for Lung Tumor Segmentation using Neural Networks (NN) and the Internet of Medical Things (IoMT). The model integrates two architectures, MobileNetV2 and U-Net, for classifying the input lung data. The input CT lung images are pre-processed using Z-score normalization, and semantic features of the lung images are extracted based on texture, intensity, and shape to provide information to the training network. The transfer learning technique is incorporated: a pre-trained NN is used as the encoder of the U-Net model for segmentation, and a Support Vector Machine classifies the input lung data as benign or malignant. The results are measured using metrics such as specificity, sensitivity, precision, accuracy and F-score on data from benchmark datasets. Compared to existing lung tumor segmentation and classification models, the proposed model provides better results and evidence for earlier disease diagnosis.
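The abstract names Z-score normalization as the pre-processing step but gives no implementation details. A minimal sketch of that standard operation (the function name and synthetic slice are illustrative, not from the paper):

```python
import numpy as np

def z_score_normalize(image: np.ndarray) -> np.ndarray:
    """Shift and scale voxel intensities to zero mean and unit variance."""
    mean = image.mean()
    std = image.std()
    if std == 0:
        # constant image carries no intensity information
        return np.zeros_like(image, dtype=float)
    return (image - mean) / std

# synthetic 2x2 "CT slice" for demonstration
slice_ = np.array([[100.0, 200.0], [300.0, 400.0]])
normalized = z_score_normalize(slice_)
```

In practice the statistics are often computed per-volume or over a body mask rather than per-slice; the paper does not say which convention it uses.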

TTGA U-Net: Two-stage two-stream graph attention U-Net for hepatic vessel connectivity enhancement.

Zhao Z, Li W, Ding X, Sun J, Xu LX

PubMed | Jun 1 2025
Accurate segmentation of hepatic vessels is pivotal for guiding preoperative planning in ablation surgery utilizing CT images. While non-contrast CT images often lack observable vessels, we focus on segmenting hepatic vessels within preoperative MR images. However, the vascular structures depicted in MR images are susceptible to noise, leading to challenges in connectivity. To address this issue, we propose a two-stage two-stream graph attention U-Net (i.e., TTGA U-Net) for hepatic vessel segmentation. Specifically, the first-stage network employs a CNN or Transformer-based architecture to preliminarily locate the vessel position, followed by an improved superpixel segmentation method to generate graph structures based on the positioning results. The second-stage network extracts graph node features through two parallel branches of a graph spatial attention network (GAT) and a graph channel attention network (GCT), employing self-attention mechanisms to balance these features. The graph pooling operation is utilized to aggregate node information. Moreover, we introduce a feature fusion module instead of skip connections to merge the two graph attention features, providing additional information to the decoder effectively. We establish a novel well-annotated high-quality MR image dataset for hepatic vessel segmentation and validate the vessel connectivity enhancement network's effectiveness on this dataset and the public dataset 3D IRCADB. Experimental results demonstrate that our TTGA U-Net outperforms state-of-the-art methods, notably enhancing vessel connectivity.
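The paper builds graph structures from superpixel segmentation of the first-stage positioning results, but the construction is not spelled out. A common convention, sketched here under the assumption that two superpixels become graph neighbours when their regions share a pixel border (all names are illustrative):

```python
import numpy as np

def superpixel_adjacency(labels: np.ndarray) -> np.ndarray:
    """Symmetric adjacency matrix: superpixels are connected when their
    regions share a horizontal or vertical pixel border."""
    n = int(labels.max()) + 1
    adj = np.zeros((n, n), dtype=int)
    # compare each pixel with its right and bottom neighbour
    pairs = np.concatenate([
        np.stack([labels[:, :-1].ravel(), labels[:, 1:].ravel()], axis=1),
        np.stack([labels[:-1, :].ravel(), labels[1:, :].ravel()], axis=1),
    ])
    for a, b in pairs:
        if a != b:
            adj[a, b] = adj[b, a] = 1
    return adj

# toy 3x3 label map with three superpixels
labels = np.array([[0, 0, 1],
                   [0, 2, 1],
                   [2, 2, 1]])
adj = superpixel_adjacency(labels)
```

The resulting adjacency would feed the graph attention branches (GAT/GCT) described above; node features per superpixel are a separate step.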

Managing class imbalance in the training of a large language model to predict patient selection for total knee arthroplasty: Results from the Artificial intelligence to Revolutionise the patient Care pathway in Hip and knEe aRthroplastY (ARCHERY) project.

Farrow L, Anderson L, Zhong M

PubMed | Jun 1 2025
This study set out to test the efficacy of different techniques for managing class imbalance, a type of data bias, in the application of a large language model (LLM) to predict patient selection for total knee arthroplasty (TKA). The study utilised data from the Artificial Intelligence to Revolutionise the Patient Care Pathway in Hip and Knee Arthroplasty (ARCHERY) project (ISRCTN18398037). Data included the pre-operative radiology reports of patients referred to secondary care for knee-related complaints from within the North of Scotland. A clinically based LLM (GatorTron) was trained to predict selection for TKA. Three methods for managing class imbalance were assessed: a standard model, class weighting, and majority class undersampling. A total of 7707 individual knee radiology reports were included (dated from 2015 to 2022). The mean text length was 74 words (range 26-275). Only 910/7707 (11.8%) patients underwent TKA surgery (the designated 'minority class'). The class weighting technique performed better for minority class discrimination and calibration than the other two techniques (recall 0.61/AUROC 0.73 for class weighting, compared with 0.54/0.70 and 0.59/0.72 for the standard model and majority class undersampling, respectively). There was also significant data loss with majority class undersampling compared with class weighting. Class weighting appears to provide the optimal method of training an LLM to perform analytical tasks on free-text clinical information in the face of significant data bias ('class imbalance'). Such knowledge is an important consideration in the development of high-performance clinical AI models within Trauma and Orthopaedics.
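The abstract does not state how the class weights were derived; a standard choice is inverse-frequency weighting, sketched here with the cohort's own counts (910 TKA vs 6797 non-TKA reports; the function name is illustrative):

```python
def inverse_frequency_weights(class_counts: dict) -> dict:
    """Weight each class inversely to its frequency, normalised by the
    number of classes (the convention used by e.g. scikit-learn's
    'balanced' mode): weight_c = total / (n_classes * count_c)."""
    total = sum(class_counts.values())
    n_classes = len(class_counts)
    return {c: total / (n_classes * n) for c, n in class_counts.items()}

# counts from the ARCHERY cohort described above
weights = inverse_frequency_weights({"TKA": 910, "no_TKA": 6797})
```

The minority TKA class receives a weight of roughly 4.2 versus roughly 0.57 for the majority class, so misclassified minority examples dominate the training loss instead of being drowned out.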

Deep learning enabled near-isotropic CAIPIRINHA VIBE in the nephrogenic phase improves image quality and renal lesion conspicuity.

Tan Q, Miao J, Nitschke L, Nickel MD, Lerchbaumer MH, Penzkofer T, Hofbauer S, Peters R, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

PubMed | Jun 1 2025
Deep learning (DL)-accelerated controlled aliasing in parallel imaging results in higher acceleration (CAIPIRINHA)-volumetric interpolated breath-hold examination (VIBE) provides high-spatial-resolution T1-weighted imaging of the upper abdomen. We aimed to investigate whether DL-CAIPIRINHA-VIBE can improve image quality, vessel conspicuity, and lesion detectability compared to standard CAIPIRINHA-VIBE in renal imaging at 3 Tesla. In this prospective study, 50 patients with 23 solid and 45 cystic renal lesions underwent MRI with clinical MR sequences, including standard CAIPIRINHA-VIBE and DL-CAIPIRINHA-VIBE sequences in the nephrographic phase at 3 Tesla. Two experienced radiologists independently evaluated both sequences and multiplanar reconstructions (MPR) of the sagittal and coronal planes for image quality on a Likert scale ranging from 1 to 5 (5 = best). Quantitative measurements, including the size of the largest lesion and renal lesion contrast ratios, were evaluated. Compared to standard CAIPIRINHA-VIBE, DL-CAIPIRINHA-VIBE showed significantly improved overall image quality, higher scores for delineation of the renal border, renal sinuses, vessels and adrenal glands, and reduced motion artifacts and perceived noise in nephrographic-phase images (all p < 0.001). DL-CAIPIRINHA-VIBE with MPR showed superior lesion conspicuity and diagnostic confidence compared to standard CAIPIRINHA-VIBE. However, DL-CAIPIRINHA-VIBE presented a more synthetic appearance and more aliasing artifacts (p < 0.023). The mean size and signal intensity of renal lesions showed no significant differences between the two sequences (p > 0.9). DL-CAIPIRINHA-VIBE is well suited for kidney imaging in the nephrographic phase, providing good image quality and improved delineation of anatomic structures and renal lesions.
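The study reports renal lesion contrast ratios but does not give its formula. One common definition, shown here purely as an illustrative sketch (the paper may use a different one), is the relative signal difference between lesion and surrounding parenchyma:

```python
def lesion_contrast_ratio(lesion_si: float, parenchyma_si: float) -> float:
    """Relative contrast: (SI_lesion - SI_parenchyma) / SI_parenchyma.
    Positive values mean the lesion is brighter than parenchyma,
    negative values darker."""
    return (lesion_si - parenchyma_si) / parenchyma_si

# hypothetical ROI mean signal intensities
ratio = lesion_contrast_ratio(300.0, 200.0)
```

Comparable ratios across the two sequences would support the abstract's finding that lesion signal intensity did not differ significantly.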

Z-SSMNet: Zonal-aware Self-supervised Mesh Network for prostate cancer detection and diagnosis with Bi-parametric MRI.

Yuan Y, Ahn E, Feng D, Khadra M, Kim J

PubMed | Jun 1 2025
Bi-parametric magnetic resonance imaging (bpMRI) has become a pivotal modality in the detection and diagnosis of clinically significant prostate cancer (csPCa). Developing AI-based systems to identify csPCa using bpMRI can transform prostate cancer (PCa) management by improving efficiency and cost-effectiveness. However, current state-of-the-art methods using convolutional neural networks (CNNs) and Transformers are limited in learning in-plane and three-dimensional spatial information from anisotropic bpMRI. Their performance also depends on the availability of large, diverse, and well-annotated bpMRI datasets. To address these challenges, we propose the Zonal-aware Self-supervised Mesh Network (Z-SSMNet), which adaptively integrates multi-dimensional (2D/2.5D/3D) convolutions to learn dense intra-slice information and sparse inter-slice information of the anisotropic bpMRI in a balanced manner. We also propose a self-supervised learning (SSL) technique that effectively captures both intra-slice and inter-slice semantic information using large-scale unlabeled data. Furthermore, we constrain the network to focus on the zonal anatomical regions to improve the detection and diagnosis of csPCa. We conducted extensive experiments on the PI-CAI (Prostate Imaging - Cancer AI) dataset, comprising more than 10,000 multi-center, multi-scanner cases. Our Z-SSMNet excelled in both lesion-level detection (AP score of 0.633) and patient-level diagnosis (AUROC score of 0.881), securing the top position in the Open Development Phase of the PI-CAI challenge, and maintained strong performance in the Closed Testing Phase, achieving an AP score of 0.690 and an AUROC score of 0.909 to secure the second-place ranking. These findings underscore the potential of AI-driven systems for csPCa diagnosis and management.
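The patient-level AUROC reported above has a simple probabilistic reading: it is the chance that a randomly chosen positive case scores higher than a randomly chosen negative one. A minimal sketch of that rank-based (Mann-Whitney) computation, with illustrative toy scores:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive scores higher
    (ties count as half a win)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# toy example: one positive case scores below one negative case
auc = auroc([0.9, 0.8, 0.7], [0.1, 0.3, 0.75])
```

An AUROC of 0.881 therefore means a csPCa patient would outrank a non-csPCa patient about 88% of the time; production code would use a sorting-based O(n log n) implementation rather than this O(n^2) double loop.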

Preoperative blood and CT-image nutritional indicators in short-term outcomes and machine learning survival framework of intrahepatic cholangiocarcinoma.

Wang M, Xie X, Lin J, Shen Z, Zou E, Wang Y, Liang X, Chen G, Yu H

PubMed | Jun 1 2025
Intrahepatic cholangiocarcinoma (iCCA) is aggressive, with limited treatment options and poor prognosis. Preoperative nutritional status assessment is crucial for predicting patient outcomes. This study aimed to compare the predictive capabilities of preoperative blood nutritional indicators, namely the albumin-bilirubin grade (ALBI), controlling nutritional status score (CONUT) and prognostic nutritional index (PNI), and CT-imaging nutritional indicators, namely the skeletal muscle index (SMI), visceral adipose tissue (VAT), subcutaneous adipose tissue (SAT) and visceral-to-subcutaneous adipose tissue ratio (VSR), in iCCA patients undergoing curative hepatectomy. A total of 290 iCCA patients from two centers were studied. Preoperative blood and CT-imaging nutritional indicators were evaluated. Short-term outcomes, namely complications, early recurrence (ER) and very early recurrence (VER), and overall survival (OS) as the long-term outcome were assessed. Six machine learning (ML) models, including Gradient Boosting (GB) survival analysis, were developed to predict OS. Preoperative blood nutritional indicators were significantly associated with postoperative complications, whereas CT-imaging nutritional indicators showed no significant associations with short-term outcomes. None of the preoperative nutritional indicators was effective in predicting early tumor recurrence. For long-term outcomes, ALBI, CONUT, PNI, SMI, and VSR were significantly associated with OS. The six ML survival models demonstrated strong and stable performance, with the GB model showing the best predictive performance (C-index: 0.755 in the training cohort, 0.714 in the validation cohort). Time-dependent ROC, calibration, and decision curve analyses confirmed its clinical value. In summary, preoperative ALBI, CONUT, and PNI scores correlated significantly with complications but not ER; the four imaging nutritional indicators were ineffective in evaluating short-term outcomes; and six ML models based on nutritional and clinicopathological variables were developed to predict iCCA prognosis.
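The C-index quoted for the survival models is Harrell's concordance index: among patient pairs where one is known to fail before the other, the fraction in which the model assigns the earlier failure the higher risk. A minimal sketch with illustrative toy data:

```python
def concordance_index(times, events, risk_scores):
    """Harrell's C-index. A pair (i, j) is comparable when patient i has
    an observed event before time j; it is concordant when i also has
    the higher predicted risk (ties count as half)."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risk_scores[i] > risk_scores[j]:
                    concordant += 1.0
                elif risk_scores[i] == risk_scores[j]:
                    concordant += 0.5
    return concordant / comparable

# perfectly concordant toy cohort: shorter survival <-> higher risk
c = concordance_index([2, 4, 6], [1, 1, 1], [0.9, 0.5, 0.1])
```

A C-index of 0.5 is chance-level ranking and 1.0 is perfect, so the GB model's 0.714 on validation indicates moderately good risk ordering.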

DKCN-Net: Deep kronecker convolutional neural network-based lung disease detection with federated learning.

Meda A, Nelson L, Jagdish M

PubMed | Jun 1 2025
In the healthcare field, lung disease detection techniques based on deep learning (DL) are widely used. However, achieving high stability while maintaining privacy remains a challenge. To address this, this research employs Federated Learning (FL), enabling doctors to train models without sharing patient data with unauthorized parties, preserving privacy in local models. The study introduces the Deep Kronecker Convolutional Neural Network (DKCN-Net) for lung disease detection. Input Computed Tomography (CT) images are sourced from the LIDC-IDRI database and denoised using an Adaptive Gaussian Filter (AGF). Lung lobe and nodule segmentation are then performed using Deep Fuzzy Clustering (DFC) and a 3-Dimensional Fully Convolutional Neural Network (3D-FCN). During feature extraction, various features are obtained, including statistical, Convolutional Neural Network (CNN), and Gray-Level Co-Occurrence Matrix (GLCM) features. Lung diseases are then detected using DKCN-Net, which combines a Deep Kronecker Neural Network (DKN) and a Parallel Convolutional Neural Network (PCNN). DKCN-Net achieves an accuracy of 92.18%, a loss of 7.82%, a Mean Squared Error (MSE) of 0.858, a True Positive Rate (TPR) of 92.99%, and a True Negative Rate (TNR) of 92.19%, with a processing time of 50 s per timestamp.
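The abstract invokes federated learning but does not describe the aggregation rule. The canonical scheme is FedAvg: each site trains locally, and only parameter updates are pooled, weighted by local dataset size. A minimal sketch (hospital names and sizes are illustrative):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """FedAvg aggregation: weighted mean of per-client parameter lists,
    proportional to each client's local dataset size. Raw patient data
    never leaves the clients; only parameters are exchanged."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[k] * (n / total) for w, n in zip(client_weights, client_sizes))
        for k in range(n_params)
    ]

# two hospitals, each holding a one-layer model (toy parameters)
w_hospital_a = [np.array([1.0, 2.0])]
w_hospital_b = [np.array([3.0, 4.0])]
global_w = federated_average([w_hospital_a, w_hospital_b], [100, 300])
```

Hospital B holds three times as much data, so the global parameters land three quarters of the way toward its weights; in a real FL round this aggregation repeats after every batch of local training.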

Regional Cerebral Atrophy Contributes to Personalized Survival Prediction in Amyotrophic Lateral Sclerosis: A Multicentre, Machine Learning, Deformation-Based Morphometry Study.

Lajoie I, Kalra S, Dadar M

PubMed | Jun 1 2025
Accurate personalized survival prediction in amyotrophic lateral sclerosis is essential for effective patient care planning. This study investigates whether grey and white matter changes measured by magnetic resonance imaging can improve individual survival predictions. We analyzed data from 178 patients with amyotrophic lateral sclerosis and 166 healthy controls in the Canadian Amyotrophic Lateral Sclerosis Neuroimaging Consortium study. A voxel-wise linear mixed-effects model assessed disease-related and survival-related atrophy detected through deformation-based morphometry, controlling for age, sex, and scanner variations. Additional linear mixed-effects models explored associations between regional imaging and clinical measurements, and their associations with time to the composite outcome of death, tracheostomy, or permanent assisted ventilation. We evaluated whether incorporating imaging features alongside clinical data could improve the performance of an individual survival distribution model. Deformation-based morphometry uncovered distinct voxel-wise atrophy patterns linked to disease progression and survival, with many of these regional atrophies significantly associated with clinical manifestations of the disease. By integrating regional imaging features with clinical data, we observed a substantial enhancement in the performance of survival models across key metrics. Our analysis identified specific brain regions, such as the corpus callosum, rostral middle frontal gyrus, and thalamus, where atrophy predicted an increased risk of mortality. This study suggests that brain atrophy patterns measured by deformation-based morphometry provide valuable insights beyond clinical assessments for prognosis. It offers a more comprehensive approach to prognosis and highlights brain regions involved in disease progression and survival, potentially leading to a better understanding of amyotrophic lateral sclerosis. ANN NEUROL 2025;97:1144-1157.
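Deformation-based morphometry quantifies atrophy from the deformation field that warps each scan to a template: the per-voxel Jacobian determinant of that mapping measures local volume change. A minimal 2D sketch of that core quantity (the paper's actual pipeline is multi-step and 3D):

```python
import numpy as np

def jacobian_determinant_2d(u):
    """Per-pixel Jacobian determinant of the mapping x -> x + u(x) for a
    displacement field u of shape (H, W, 2), with u[..., 0] the
    x-component and u[..., 1] the y-component. Values below 1 indicate
    local shrinkage (atrophy); above 1, local expansion."""
    dux_dy, dux_dx = np.gradient(u[..., 0])  # derivatives of x-component
    duy_dy, duy_dx = np.gradient(u[..., 1])  # derivatives of y-component
    # det(I + grad u) expanded for the 2x2 case
    return (1 + dux_dx) * (1 + duy_dy) - dux_dy * duy_dx

# zero displacement -> identity mapping -> determinant 1 everywhere
det = jacobian_determinant_2d(np.zeros((4, 4, 2)))
```

Maps of this determinant (or its log) are what the voxel-wise mixed-effects models above analyse for disease- and survival-related atrophy.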

Retaking assessment system based on the inspiratory state of chest X-ray image.

Matsubara N, Teramoto A, Takei M, Kitoh Y, Kawakami S

PubMed | Jun 1 2025
When taking chest X-rays, the patient is encouraged to take maximum inspiration and the radiological technologist takes the image at the appropriate moment. If the image is not taken at maximum inspiration, it must be retaken. However, judgments of whether retaking is necessary vary between operators. We therefore considered that this variation might be reduced by developing a retaking assessment system that evaluates whether retaking is necessary using a convolutional neural network (CNN). Training the CNN requires input chest X-ray images with corresponding correct labels indicating whether retaking is necessary. However, a static chest X-ray image alone cannot establish whether inspiration was sufficient (no retake needed) or insufficient (retake required). Therefore, we generated input images and labels from dynamic digital radiography (DDR) and conducted the training. Verification using 18 dynamic chest X-ray cases (5400 images) and 48 actual chest X-ray cases (96 images) showed that the VGG16-based architecture achieved an assessment accuracy of 82.3% even on actual chest X-ray images. Therefore, if the proposed method were used in hospitals, it could reduce the variability in judgment between operators.

Diagnosis of Thyroid Nodule Malignancy Using Peritumoral Region and Artificial Intelligence: Results of Hand-Crafted, Deep Radiomics Features and Radiologists' Assessment in Multicenter Cohorts.

Abbasian Ardakani A, Mohammadi A, Yeong CH, Ng WL, Ng AH, Tangaraju KN, Behestani S, Mirza-Aghazadeh-Attari M, Suresh R, Acharya UR

PubMed | Jun 1 2025
To develop, test, and externally validate a hybrid artificial intelligence (AI) model based on hand-crafted and deep radiomics features extracted from B-mode ultrasound images for differentiating benign and malignant thyroid nodules, compared against senior and junior radiologists. A total of 1602 thyroid nodules from four centers across two countries (Iran and Malaysia) were included for the development and validation of AI models. From each original and expanded contour, which included the peritumoral region, 2060 hand-crafted and 1024 deep radiomics features were extracted to assess the contribution of the peritumoral region to the AI diagnosis profile. The performance of four algorithms, namely support vector machines with linear (SVM_lin) and radial basis function (SVM_RBF) kernels, logistic regression, and K-nearest neighbors, was evaluated. The diagnostic performance of the proposed AI model was compared with that of two radiologists applying the American Thyroid Association (ATA) and the Thyroid Imaging Reporting & Data System (TI-RADS™) guidelines to show the model's applicability in clinical routine. Thirty-five hand-crafted and 36 deep radiomics features were considered for model development. In the training step, SVM_RBF and SVM_lin showed the best results when rectangular contours 40% greater than the original contours were used for both hand-crafted and deep features. Ensemble learning with SVM_RBF and SVM_lin obtained AUCs of 0.954, 0.949, 0.932, and 0.921 in the internal and external validations of the Iran cohort and Malaysia cohorts 1 and 2, respectively, and outperformed both radiologists. The proposed AI model trained on the nodule plus the peritumoral region performed optimally in external validations and outperformed the radiologists using the ATA and TI-RADS guidelines.
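The best-performing setting used rectangular contours 40% greater than the original nodule contour; the abstract does not specify whether "greater" refers to area or side length. A sketch under the assumption that each side is grown by 40% (centered on the nodule, clipped to image bounds; all names are illustrative):

```python
def expand_bbox(x0, y0, x1, y1, factor=0.40, img_w=None, img_h=None):
    """Grow a rectangular nodule bounding box symmetrically by `factor`
    of its width/height to include the peritumoral region, clipping to
    image bounds when they are provided."""
    w, h = x1 - x0, y1 - y0
    dx, dy = w * factor / 2, h * factor / 2
    nx0, ny0, nx1, ny1 = x0 - dx, y0 - dy, x1 + dx, y1 + dy
    if img_w is not None:
        nx0, nx1 = max(0, nx0), min(img_w, nx1)
    if img_h is not None:
        ny0, ny1 = max(0, ny0), min(img_h, ny1)
    return nx0, ny0, nx1, ny1

# a 10x10 nodule box grows to 14x14, centered on the original
box = expand_bbox(10, 10, 20, 20)
```

Radiomics features would then be extracted from both the original and the expanded box so the peritumoral contribution can be compared.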