
Does the deep learning-based iterative reconstruction affect the measuring accuracy of bone mineral density in low-dose chest CT?

Hao H, Tong J, Xu S, Wang J, Ding N, Liu Z, Zhao W, Huang X, Li Y, Jin C, Yang J

Jun 1 2025
To investigate the impact of a deep learning-based iterative reconstruction algorithm on image quality and the measurement accuracy of bone mineral density (BMD) in low-dose chest CT. Phantom and patient studies were conducted separately, using the same low-dose protocol for both. All images were reconstructed with filtered back projection, hybrid iterative reconstruction (HIR) (KARL®, levels 3, 5, and 7), and deep learning-based iterative reconstruction (artificial intelligence iterative reconstruction [AIIR], low, medium, and high strength). The noise power spectrum (NPS) and the task-based transfer function (TTF) were evaluated on a phantom, and the accuracy and relative error (RE) of BMD were evaluated using a European spine phantom. Subjective evaluation was performed by 2 experienced radiologists. BMD was measured using quantitative CT (QCT). Image noise, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), BMD values, and subjective scores were compared with the Wilcoxon signed-rank test. Cohen's kappa test was used to evaluate inter-reader and inter-group agreement. AIIR significantly reduced noise and improved resolution on phantom images. There were no significant differences among BMD values across all image groups (all P > 0.05), and the RE of BMD measured on AIIR images was smaller. In the objective evaluation, all AIIR strengths achieved less image noise and higher SNR and CNR (all P < 0.05), with AIIR-H showing the lowest noise and highest SNR and CNR (P < 0.05). Increasing AIIR strength did not significantly affect BMD values (all P > 0.05). Deep learning-based iterative reconstruction reduced image noise and improved spatial resolution without affecting the accuracy of BMD measurement, so BMD can be measured accurately in low-dose chest CT reconstructed with this algorithm.
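The objective metrics in this abstract (SNR, CNR) reduce to simple ROI statistics. A minimal sketch in Python, using one common convention for the two ratios (the abstract does not state its exact formulas, so treat these definitions as illustrative):

```python
import numpy as np

def snr_cnr(roi, background):
    """Compute SNR of an ROI and CNR against a background ROI.

    SNR = mean(ROI) / std(ROI)
    CNR = |mean(ROI) - mean(background)| / std(background)
    These are common simple definitions, not necessarily the
    exact ones used in the study.
    """
    roi = np.asarray(roi, dtype=float)
    bg = np.asarray(background, dtype=float)
    snr = roi.mean() / roi.std()
    cnr = abs(roi.mean() - bg.mean()) / bg.std()
    return snr, cnr
```

In practice the ROIs would be pixel arrays drawn on the reconstructed CT images, one per reconstruction algorithm and strength.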

A radiomics model combining machine learning and neural networks for high-accuracy prediction of cervical lymph node metastasis on ultrasound of head and neck squamous cell carcinoma.

Fukuda M, Eida S, Katayama I, Takagi Y, Sasaki M, Sumi M, Ariji Y

Jun 1 2025
This study aimed to develop an ultrasound image-based radiomics model for diagnosing cervical lymph node (LN) metastasis in patients with head and neck squamous cell carcinoma (HNSCC) with higher accuracy than previous models. A total of 537 LNs (260 metastatic and 277 nonmetastatic) from 126 patients (78 men, 48 women; average age 63 years) were enrolled. The multivariate analysis software Prediction One (Sony Network Communications Corporation) was used to create the diagnostic models, and three machine learning methods were adopted as comparison approaches. Based on combinations of texture analysis results, clinical information, and ultrasound findings interpreted by specialists, a total of 12 models were created, three for each machine learning method, and their diagnostic performance was compared. The three best models had an area under the curve (AUC) of 0.98. Parameters related to ultrasound findings, such as presence of a hilum, echogenicity, and granular parenchymal echoes, showed particularly high contributions. Other significant contributors were texture features indicating the minimum pixel value, the number of contiguous pixels with the same echogenicity, and the uniformity of gray levels. The radiomics model developed was able to accurately diagnose cervical LN metastasis in HNSCC.
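The area under the ROC curve reported for the best models can be computed directly from scores and labels via the Mann-Whitney formulation. An illustrative implementation (the study itself used the commercial Prediction One software, not this code):

```python
def roc_auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic.

    Equals the probability that a randomly chosen positive case
    receives a higher score than a randomly chosen negative case,
    with ties counting half.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```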

Patent Analysis of Dental CBCT Machines.

Yeung AWK, Nalley A, Hung KF, Oenning AC, Tanaka R

Jun 1 2025
Cone Beam Computed Tomography (CBCT) has become a crucial imaging tool in modern dentistry. To date, no review has provided a comprehensive understanding of the technological advancements in CBCT or of the entities driving them. This study aimed to analyse the patent records associated with CBCT technology, gain insights into trends and breakthroughs, and identify manufacturers' key areas of focus. The online patent database The Lens was accessed on 3 January 2025 to identify relevant patent records. A total of 706 patent records were identified and analysed. The majority of these patents were contributed by CBCT manufacturers. The United States was the jurisdiction with the most patent records, followed by Europe and China. Some manufacturers hold patents for common features of CBCT systems, such as motion artifact correction, metal artifact reduction, reconstruction of panoramic images from 3D data, and incorporation of artificial intelligence. Patent analysis can offer valuable insights into the development and advancement of CBCT technology and foster collaboration between manufacturers, researchers, and clinicians. The advancements in CBCT technology, as reflected by patent trends, enhance diagnostic accuracy and treatment planning, and understanding these innovations can aid clinicians in selecting the most effective imaging tools for patient care.

PET and CT based DenseNet outperforms advanced deep learning models for outcome prediction of oropharyngeal cancer.

Ma B, Guo J, Dijk LVV, Langendijk JA, Ooijen PMAV, Both S, Sijtsema NM

Jun 1 2025
In the HECKTOR 2022 challenge set [1], several state-of-the-art (SOTA) deep learning models were introduced for predicting the recurrence-free period (RFP) in head and neck cancer patients using PET and CT images. This study investigates whether a conventional DenseNet architecture, with optimized numbers of layers and image-fusion strategies, could achieve performance comparable to the SOTA models. The HECKTOR 2022 dataset comprises 489 oropharyngeal cancer (OPC) patients from seven distinct centers and was randomly divided into a training set (n = 369) and an independent test set (n = 120). An additional dataset of 400 OPC patients who underwent (chemo)radiotherapy at our center was employed for external testing. Each patient's data included pre-treatment CT and PET scans, manually generated gross tumour volume (GTV) contours for primary tumors and lymph nodes, and RFP information. The present study compared the performance of DenseNet against three SOTA models developed on the HECKTOR 2022 dataset. When inputting CT, PET, and GTV using the early-fusion approach (treating them as different channels of the input), DenseNet81 (with 81 layers) obtained an internal test C-index of 0.69, comparable with the SOTA models. Notably, removing GTV from the input yielded the same internal test C-index of 0.69 while improving the external test C-index from 0.59 to 0.63. Furthermore, compared to PET-only models, DenseNet81 with late fusion (concatenation of extracted features) of CT and PET demonstrated superior C-index values of 0.68 and 0.66 in the internal and external test sets respectively, whereas early fusion was better only in the internal test set. The basic DenseNet architecture with 81 layers demonstrated predictive performance on par with SOTA models featuring more intricate architectures in the internal test set, and better performance in the external test, where the late fusion of CT and PET imaging data performed best.
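The C-index reported throughout this abstract is Harrell's concordance index for time-to-event outcomes. A minimal sketch of the metric (not the authors' evaluation code):

```python
def c_index(times, events, risks):
    """Harrell's concordance index for a risk score.

    A pair (i, j) is comparable when the subject with the shorter
    follow-up time experienced the event; it is concordant when
    that subject also has the higher predicted risk (risk ties
    count half).
    """
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable
```

A C-index of 0.5 corresponds to random ranking, 1.0 to perfect ranking, which puts the reported 0.63-0.69 values in context.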

EMI-LTI: An enhanced integrated model for lung tumor identification using Gabor filter and ROI.

J J, Haw SC, Palanichamy N, Ng KW, Aneja M, Taiyab A

Jun 1 2025
In this work, CT scan images of lung cancer patients are analysed to diagnose the disease at an early stage. The images are pre-processed with a series of steps: a Gabor filter, contour extraction to label the region of interest (ROI), sharpening, and cropping. Data augmentation is applied to the pre-processed images, which are then fed to two proposed architectures, namely (1) a Convolutional Neural Network (CNN) and (2) the Enhanced Integrated Model for Lung Tumor Identification (EIM-LTI).
• Comparisons are made between non-pre-processed data and Haar- and Gabor-filtered data in the CNN and EIM-LTI models.
• The performance of the CNN and EIM-LTI models is evaluated through metrics such as precision, sensitivity, F1-score, specificity, and training and validation accuracy.
• The EIM-LTI model's training accuracy is 2.67 % higher than the CNN's, its validation accuracy is 2.7 % higher, and its validation loss is 0.0333 higher than the CNN's.
• In a comparative analysis of model accuracies for lung cancer detection, cross-validation with 5 folds achieved an accuracy of 98.27 %, and evaluation on unseen data yielded 92 % accuracy.
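The Gabor filter used in the pre-processing chain is a Gaussian envelope modulated by a sinusoidal carrier, which enhances oriented texture before segmentation. A sketch of a real-valued Gabor kernel with illustrative default parameters (the paper does not publish its exact settings):

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, lambd=10.0,
                 gamma=0.5, psi=0.0):
    """Real-valued 2-D Gabor filter kernel.

    sigma: envelope width, theta: orientation, lambd: carrier
    wavelength, gamma: spatial aspect ratio, psi: phase offset.
    Defaults are illustrative, not the paper's settings.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by theta.
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lambd + psi)
    return envelope * carrier
```

The kernel would be convolved with each CT slice; libraries such as scikit-image and OpenCV ship equivalent ready-made implementations.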

Brain tumor segmentation with deep learning: Current approaches and future perspectives.

Verma A, Yadav AK

Jun 1 2025
Accurate brain tumor segmentation from MRI images is critical in medicine, as it directly impacts the efficacy of diagnostic and treatment plans. Accurate segmentation of the tumor region can be challenging, especially when noise and abnormalities are present. This research provides a systematic review of automatic brain tumor segmentation techniques, with a specific focus on the design of network architectures. The review categorizes existing methods into unsupervised and supervised learning techniques, and within supervised techniques into machine learning and deep learning approaches. Deep learning techniques are thoroughly reviewed, with a particular focus on CNN-based, U-Net-based, transfer-learning-based, transformer-based, and hybrid transformer-based methods. This survey encompasses a broad spectrum of automatic segmentation methodologies, from traditional machine learning approaches to advanced deep learning frameworks, and provides an in-depth comparison of performance metrics, model efficiency, and robustness across multiple datasets, particularly the BraTS dataset. The study further examines multi-modal MRI imaging and its influence on segmentation accuracy, addressing domain adaptation, class imbalance, and generalization challenges. The analysis highlights the current challenges in computer-aided diagnostic (CAD) systems, examining how different models and imaging sequences impact performance. Recent advancements in deep learning, especially the widespread use of U-Net architectures, have significantly enhanced medical image segmentation. This review critically evaluates these developments, focusing on the iterative improvements in U-Net models that have driven progress in brain tumor segmentation. Furthermore, it explores various techniques for improving U-Net performance in medical applications, focusing on its potential for improving diagnostic and treatment planning procedures.
The efficiency of these automated segmentation approaches is rigorously evaluated on the BraTS dataset, the benchmark of the annual MICCAI Multimodal Brain Tumor Segmentation challenge. This evaluation provides insights into the current state of the art and identifies key areas for future research and development.
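Segmentation performance on BraTS is conventionally reported as the Dice similarity coefficient between predicted and ground-truth masks. A minimal reference implementation (not the challenge's official evaluation code):

```python
import numpy as np

def dice(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks.

    Dice = 2|A ∩ B| / (|A| + |B|); eps guards against division
    by zero when both masks are empty.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)
```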

Beyond traditional orthopaedic data analysis: AI, multimodal models and continuous monitoring.

Oettl FC, Zsidai B, Oeding JF, Hirschmann MT, Feldt R, Tischer T, Samuelsson K

Jun 1 2025
Multimodal artificial intelligence (AI) has the potential to revolutionise healthcare by enabling the simultaneous processing and integration of various data types, including medical imaging, electronic health records, genomic information and real-time data. This review explores the current applications and future potential of multimodal AI across healthcare, with a particular focus on orthopaedic surgery. In presurgical planning, multimodal AI has demonstrated significant improvements in diagnostic accuracy and risk prediction, with studies reporting area under the receiver operating characteristic curve values indicating good to excellent performance across various orthopaedic conditions. Intraoperative applications leverage advanced imaging and tracking technologies to enhance surgical precision, while postoperative care has been advanced through continuous patient monitoring and early detection of complications. Despite these advances, significant challenges remain in data integration, standardisation, and privacy protection. Technical solutions such as federated learning (which keeps model training decentralised) and edge computing (which moves data analysis on-site or near-site rather than to centralised data centres) are being developed to address these concerns while maintaining compliance with regulatory frameworks. As this field continues to evolve, the integration of multimodal AI promises to advance personalised medicine, improve patient outcomes, and transform healthcare delivery through more comprehensive and nuanced analysis of patient data. Level of Evidence: Level V.
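Federated learning, one of the technical solutions mentioned above, can be illustrated by its simplest aggregation rule, FedAvg: each site trains locally and only model parameters leave the site, where the server computes a sample-size-weighted average. A toy sketch with hypothetical parameter vectors:

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """One round of federated averaging (FedAvg).

    client_weights: list of parameter vectors, one per site.
    client_sizes: number of training samples at each site.
    Returns the sample-size-weighted average of the parameters;
    raw patient data never leaves the sites.
    """
    total = sum(client_sizes)
    avg = np.zeros_like(np.asarray(client_weights[0], dtype=float))
    for w, n in zip(client_weights, client_sizes):
        avg += (n / total) * np.asarray(w, dtype=float)
    return avg
```

Real deployments add secure aggregation and multiple local epochs per round; this sketch shows only the core averaging step.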

A scoping review on the integration of artificial intelligence in point-of-care ultrasound: Current clinical applications.

Kim J, Maranna S, Watson C, Parange N

Jun 1 2025
Artificial intelligence (AI) is used increasingly in point-of-care ultrasound (POCUS). However, the true role, utility, advantages, and limitations of AI tools in POCUS remain poorly understood. This review aimed to scope the current literature on AI in POCUS to identify (1) how AI is being applied in POCUS and (2) how AI in POCUS could be utilized in clinical settings. The review followed the JBI scoping review methodology. A search was conducted in Medline, Embase, Emcare, Scopus, Web of Science, Google Scholar, and AI POCUS manufacturer websites. Selection criteria, evidence screening, and selection were performed in Covidence. Data extraction and analysis were performed in Microsoft Excel by the primary investigator and confirmed by the secondary investigators. Thirty-three papers were included. AI POCUS of the cardiopulmonary region was the most prominent in the literature, and AI was most frequently used to automatically measure biometry from POCUS images. AI POCUS was most often used in acute settings, although novel applications in non-acute and low-resource settings were also explored. AI had the potential to increase POCUS accessibility and usability and to expedite care and management, and it showed reasonably high diagnostic accuracy in limited applications such as measurement of left ventricular ejection fraction, inferior vena cava collapsibility index, and left ventricular outflow tract velocity time integral, and identification of B-lines of the lung. However, AI could not interpret poor images, underperformed compared with standard-of-care diagnostic methods, and was less effective in patients with specific disease states, such as severe illnesses that limit POCUS image acquisition. This review uncovered the applications of AI in POCUS and the advantages and limitations of AI POCUS in different clinical settings. Future research must first establish the diagnostic accuracy of AI POCUS tools and then explore their clinical utility through clinical trials.
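Two of the automated measurements listed, the inferior vena cava collapsibility index and the left ventricular ejection fraction, are simple ratios once the underlying dimensions are extracted from the ultrasound images. The standard formulas, as illustrative code:

```python
def collapsibility_index(d_max, d_min):
    """Inferior vena cava collapsibility index (IVC-CI), percent.

    IVC-CI = (Dmax - Dmin) / Dmax * 100, from maximum and minimum
    IVC diameters across the respiratory cycle.
    """
    return (d_max - d_min) / d_max * 100.0

def ejection_fraction(edv, esv):
    """Left ventricular ejection fraction (LVEF), percent.

    LVEF = (EDV - ESV) / EDV * 100, from end-diastolic and
    end-systolic volumes.
    """
    return (edv - esv) / edv * 100.0
```

The hard part AI automates is measuring Dmax, Dmin, EDV, and ESV from the images; the ratios themselves are trivial.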

Estimating patient-specific organ doses from head and abdominal CT scans via machine learning with optimized regulation strength and feature quantity.

Shao W, Qu L, Lin X, Yun W, Huang Y, Zhuo W, Liu H

Jun 1 2025
This study investigates the estimation of patient-specific organ doses from CT scans via radiomics-feature-based support vector regression (SVR) models, aiming to maximize the models' predictive accuracy and robustness by fine-tuning the regularization parameter and the number of input features. CT images from head and abdominal scans were processed with DeepViewer®, an auto-segmentation tool, to define organ regions of interest (ROIs). Radiomics features were extracted from the CT data and ROIs, and benchmark organ doses were calculated through Monte Carlo (MC) simulations. SVR models trained on the extracted radiomics features were used to predict patient-specific organ doses from CT scans. The trained models were optimized by adjusting the input feature quantity and the regularization parameter C to find configurations yielding accurate predictions. C values of 5 and 10 brought the SVR models to saturation for both head and abdominal organs. The models' MAPE and R<sup>2</sup> depended strongly on organ type. For head organs, the appropriate settings were C = 5 or 10 with 50 input features for the brain and 200 for the left eye, right eye, left lens, and right lens. For abdominal organs, they were C = 5 or 10 with 80 input features for the bowel, 50 for the left and right kidneys, and 100 for the liver. Selecting appropriate combinations of input feature quantity and regularization parameter can maximize the predictive accuracy and robustness of radiomics-feature-based SVR models for patient-specific organ dose prediction from CT scans.
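The accuracy metrics the study optimizes, MAPE and R², have standard definitions; a minimal sketch (not the authors' code):

```python
import numpy as np

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100.0

def r2(y_true, y_pred):
    """Coefficient of determination, R^2 = 1 - SS_res / SS_tot."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```

Here `y_true` would be the Monte Carlo benchmark doses and `y_pred` the SVR predictions for a given organ.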

The impact of training image quality with a novel protocol on artificial intelligence-based LGE-MRI image segmentation for potential atrial fibrillation management.

Berezhnoy AK, Kalinin AS, Parshin DA, Selivanov AS, Demin AG, Zubov AG, Shaidullina RS, Aitova AA, Slotvitsky MM, Kalemberg AA, Kirillova VS, Syrovnev VA, Agladze KI, Tsvelaya VA

Jun 1 2025
Atrial fibrillation (AF) is the most common cardiac arrhythmia, affecting up to 2 % of the population. Catheter ablation is a promising treatment for AF, particularly for paroxysmal AF patients, but it often has high recurrence rates. Developing in silico models of patients' atria during the ablation procedure using cardiac MRI data may help reduce these rates. This study aims to develop an effective automated deep learning-based segmentation pipeline by compiling a specialized dataset and employing standardized labeling protocols to improve segmentation accuracy and efficiency. In doing so, we aim to achieve the highest possible accuracy and generalization ability while minimizing the burden on clinicians involved in manual data segmentation. We collected LGE-MRI data from VMRC and the cDEMRIS database. Two specialists manually labeled the data using standardized protocols to reduce subjective errors. Neural network (nnU-Net and smpU-Net++) performance was evaluated using statistical tests, including sensitivity and specificity analysis. A new database of LGE-MRI images, based on manual segmentation, was created (VMRC). Our approach with consistent labeling protocols achieved a Dice coefficient of 92.4 % ± 0.8 % for the cavity and 64.5 % ± 1.9 % for LA walls. Using the pre-trained RIFE model, we attained a Dice score of approximately 89.1 % ± 1.6 % for atrial LGE-MRI imputation, outperforming classical methods. Sensitivity and specificity values demonstrated substantial enhancement in the performance of neural networks trained with the new protocol. Standardized labeling and RIFE applications significantly improved machine learning tool efficiency for constructing 3D LA models. This novel approach supports integrating state-of-the-art machine learning methods into broader in silico pipelines for predicting ablation outcomes in AF patients.
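The sensitivity and specificity used to compare the networks derive from confusion-matrix counts; the standard definitions as a sketch (not the authors' pipeline):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity and specificity from confusion-matrix counts.

    Sensitivity = TP / (TP + FN): fraction of true wall/cavity
    pixels the model recovers. Specificity = TN / (TN + FP):
    fraction of background pixels correctly left unlabeled.
    """
    return tp / (tp + fn), tn / (tn + fp)
```

For segmentation, the counts are typically accumulated per pixel (or voxel) over the predicted and ground-truth masks.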
