Page 198 of 205 · 2045 results

Real-time brain tumour diagnoses using a novel lightweight deep learning model.

Alnageeb MHO, M H S

pubmed logopapers · May 6, 2025
Brain tumours remain a leading cause of death worldwide, highlighting the critical need for effective and accurate diagnostic tools. This article presents MK-YOLOv8, a lightweight deep learning framework for real-time detection and categorization of brain tumours in MRI images. Based on the YOLOv8 architecture, the proposed model incorporates Ghost Convolution, the C3Ghost module, and the SPPELAN module to improve feature extraction and substantially decrease computational complexity. An x-small object detection layer has been added to support precise detection of small and x-small tumours, which is crucial for early diagnosis. Trained on the Figshare Brain Tumour (FBT) dataset comprising 3,064 MRI images, MK-YOLOv8 achieved a mean Average Precision (mAP) of 99.1% at IoU 0.50 and 88.4% at IoU 0.50-0.95, outperforming YOLOv8 (98% and 78.8%, respectively). Glioma recall improved by 26%, underscoring enhanced sensitivity to challenging tumour types. With a computational footprint of only 96.9 GFLOPs (37.5% of YOLOv8x's FLOPs) and 12.6 million parameters (a mere 18.5% of YOLOv8x's), MK-YOLOv8 delivers high efficiency with reduced resource demands. The model was also trained on the Br35H dataset (801 images) to verify its robustness and generalization, achieving a mAP of 98.6% at IoU 0.50. The proposed model operates at 62 frames per second (FPS) and is suited to real-time clinical workflows. These developments establish MK-YOLOv8 as an innovative framework that overcomes challenges in tiny tumour identification and provides a generalizable, adaptable, and precise detection approach for brain tumour diagnostics in clinical settings.
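The mAP figures above are reported at fixed intersection-over-union (IoU) thresholds between predicted and ground-truth boxes. A minimal sketch of the IoU criterion (illustrative only, not the authors' implementation):

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (empty if the boxes do not overlap).
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A prediction counts as a true positive at IoU 0.50 only if it overlaps
# a ground-truth box by at least that fraction.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```

Evaluating at IoU 0.50-0.95 averages this criterion over thresholds from 0.50 to 0.95 in steps of 0.05, which is why that figure is always the stricter of the two.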

Keypoint localization and parameter measurement in ultrasound biomicroscopy anterior segment images based on deep learning.

Qinghao M, Sheng Z, Jun Y, Xiaochun W, Min Z

pubmed logopapers · May 6, 2025
Accurate measurement of anterior segment parameters is crucial for diagnosing and managing ophthalmic conditions such as glaucoma, cataracts, and refractive errors. However, traditional clinical measurement methods are often time-consuming, labor-intensive, and susceptible to inaccuracies. Given the growing potential of artificial intelligence in ophthalmic diagnostics, this study aims to develop and evaluate a deep learning model capable of automatically extracting key points and precisely measuring multiple clinically significant anterior segment parameters from ultrasound biomicroscopy (UBM) images. These parameters include central corneal thickness (CCT), anterior chamber depth (ACD), pupil diameter (PD), angle-to-angle distance (ATA), sulcus-to-sulcus distance (STS), lens thickness (LT), and crystalline lens rise (CLR). A data set of 716 UBM anterior segment images was collected from Tianjin Medical University Eye Hospital. YOLOv8 was utilized to segment four key anatomical structures: cornea-sclera, anterior chamber, pupil, and iris-ciliary body, thereby enhancing the accuracy of keypoint localization. Only images with an intact posterior capsule lentis were selected to create an effective data set for parameter measurement. Ten keypoints were localized across the data set, allowing the calculation of the seven essential parameters. Control experiments were conducted to evaluate the impact of segmentation on measurement accuracy, with model predictions compared against clinical gold standards. The segmentation model achieved a mean IoU of 0.8836 and an mPA of 0.9795. Following segmentation, the binary classification model attained an mAP of 0.9719, with a precision of 0.9260 and a recall of 0.9615. Keypoint localization exhibited a Euclidean distance error of 58.73 ± 63.04 μm, improving from the pre-segmentation error of 71.57 ± 67.36 μm. Localization mAP was 0.9826, with a precision of 0.9699, a recall of 0.9642, and an FPS of 32.64.
In addition, parameter error analysis and Bland-Altman plots demonstrated improved agreement with clinical gold standards after segmentation. This deep learning approach for UBM image segmentation, keypoint localization, and parameter measurement is feasible, enhancing clinical diagnostic efficiency for anterior segment parameters.
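The localization error above is the mean ± SD Euclidean distance between predicted and ground-truth keypoints, reported in micrometres. A minimal sketch, assuming a hypothetical pixel-to-micrometre scale factor (the study's actual image resolution is not stated here):

```python
import numpy as np

def keypoint_error_um(pred, gt, um_per_pixel):
    """Mean and SD of the Euclidean distance (in micrometres) between
    predicted and ground-truth keypoints, each given as (N, 2) pixel coordinates."""
    d = np.linalg.norm(np.asarray(pred, float) - np.asarray(gt, float), axis=1)
    d_um = d * um_per_pixel  # convert pixel distances to micrometres
    return d_um.mean(), d_um.std()

pred = [[100, 120], [240, 80]]
gt = [[103, 124], [240, 85]]
# um_per_pixel = 20.0 is an assumed, illustrative scale factor.
mean_um, sd_um = keypoint_error_um(pred, gt, um_per_pixel=20.0)
print(mean_um)  # both distances are 5 px -> (5 + 5) / 2 * 20 = 100.0
```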

Stacking classifiers based on integrated machine learning model: fusion of CT radiomics and clinical biomarkers to predict lymph node metastasis in locally advanced gastric cancer patients after neoadjuvant chemotherapy.

Ling T, Zuo Z, Huang M, Ma J, Wu L

pubmed logopapers · May 6, 2025
The early prediction of lymph node positivity (LN+) after neoadjuvant chemotherapy (NAC) is crucial for optimizing individualized treatment strategies. This study aimed to integrate radiomic features and clinical biomarkers through machine learning (ML) approaches to enhance prediction accuracy in patients with locally advanced gastric cancer (LAGC). We retrospectively enrolled 277 patients with LAGC and randomly divided them into training (n = 193) and validation (n = 84) sets at a 7:3 ratio. In total, 1,130 radiomics features were extracted from pre-treatment portal venous phase computed tomography scans. These features were linearly combined through feature engineering to develop a radiomics score (rad score). Then, using the rad score and clinical biomarkers as input features, we applied simple statistical strategies (relying on a single ML model) and integrated statistical strategies (including classification model integration techniques such as hard voting, soft voting, and stacking) to predict LN+ post-NAC. The diagnostic performance of the models was assessed using receiver operating characteristic curves with corresponding areas under the curve (AUCs). Of all ML models, the stacking classifier, an integrated statistical strategy, exhibited the best performance, achieving an AUC of 0.859 for predicting LN+ in patients with LAGC. This predictive model was transformed into a publicly available online risk calculator. We developed a stacking classifier that integrates radiomics and clinical biomarkers to predict LN+ in patients with LAGC undergoing surgical resection, providing personalized treatment insights.
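The stacking strategy described above trains several base classifiers and a meta-learner that combines their out-of-fold predictions. It can be sketched with scikit-learn's `StackingClassifier`; the synthetic features below merely stand in for the rad score and clinical biomarkers, which are not public:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for the 277-patient feature table (rad score + biomarkers).
X, y = make_classification(n_samples=277, n_features=6, random_state=0)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)

stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    final_estimator=LogisticRegression(),  # meta-learner over base-model outputs
    cv=5,  # base predictions fed to the meta-learner are out-of-fold
)
stack.fit(X_tr, y_tr)
auc = roc_auc_score(y_va, stack.predict_proba(X_va)[:, 1])
print(round(auc, 3))
```

The base estimators and sample sizes here are illustrative; the study's actual model choices are not specified in the abstract.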

Deep Learning-Based CT-Less Cardiac Segmentation of PET Images: A Robust Methodology for Multi-Tracer Nuclear Cardiovascular Imaging.

Salimi Y, Mansouri Z, Nkoulou R, Mainta I, Zaidi H

pubmed logopapers · May 6, 2025
Quantitative cardiovascular PET/CT imaging is useful in the diagnosis of multiple cardiac perfusion and motion pathologies. The common approach to cardiac segmentation consists of using co-registered CT images, exploiting publicly available deep learning (DL)-based segmentation models. However, the mismatch between structural CT images and PET uptake limits the usefulness of these approaches. Besides, the performance of DL models is not consistent on the low-dose or ultra-low-dose CT images commonly used in clinical PET/CT imaging. In this work, we developed a DL-based methodology to tackle this issue by directly segmenting cardiac PET images. This study included 406 cardiac PET images from 146 patients (43 <sup>18</sup>F-FDG, 329 <sup>13</sup>N-NH<sub>3</sub>, and 37 <sup>82</sup>Rb images). Using DL nnU-Net models previously trained in our group, we segmented the whole heart and the three main cardiac components, namely the left myocardium (LM), left ventricle cavity (LV), and right ventricle (RV), on co-registered CT images. The segmentation was resampled to PET resolution and edited through a combination of automated image processing and manual correction. The corrected segmentation masks and SUV PET images were fed to a nnU-Net V2 pipeline to be trained with a fivefold data-split strategy on two tasks: task #1 for whole-heart segmentation and task #2 for segmentation of the three cardiac components. Fifteen cardiac images were used as an external validation set. The DL-delineated masks were compared with standard-of-reference masks using the Dice coefficient, Jaccard distance, mean surface distance, and segment volume relative error (%). The task #1 average Dice coefficient in internal fivefold validation was 0.932 ± 0.033. The average Dice on the 15 external cases was comparable with the fivefold Dice, reaching an average of 0.941 ± 0.018. The task #2 average Dice in fivefold validation was 0.88 ± 0.063, 0.828 ± 0.091, and 0.876 ± 0.062 for LM, LV, and RV, respectively.
There was no statistically significant difference among the Dice coefficients, either between images acquired with the three radiotracers or between the different folds (P-values > 0.05). The overall average volume prediction error in cardiac component segmentation was less than 2%. We developed an automated DL-based segmentation pipeline to segment the whole heart and cardiac components with acceptable accuracy and robust performance on the external test set and across the three radiotracers used in nuclear cardiovascular imaging. The proposed methodology can overcome unreliable segmentations performed on CT images.
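The Dice coefficient reported throughout this abstract measures the overlap between a predicted mask and the reference mask. A minimal sketch on toy 2D masks (the study uses 3D PET volumes, but the formula is identical):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks of equal shape."""
    a = np.asarray(mask_a, bool)
    b = np.asarray(mask_b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

gt = np.zeros((4, 4), int)
gt[1:3, 1:3] = 1        # reference mask: 4 voxels
pred = np.zeros((4, 4), int)
pred[1:3, 1:4] = 1      # prediction: 6 voxels, 4 of them overlapping
print(dice(pred, gt))   # 2*4 / (6 + 4) = 0.8
```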

Deep Learning for Classification of Solid Renal Parenchymal Tumors Using Contrast-Enhanced Ultrasound.

Bai Y, An ZC, Du LF, Li F, Cai YY

pubmed logopapers · May 6, 2025
The purpose of this study is to assess the ability of deep learning models to classify different subtypes of solid renal parenchymal tumors using contrast-enhanced ultrasound (CEUS) images and to compare their classification performance. A retrospective study was conducted using CEUS images of 237 kidney tumors, including 46 angiomyolipomas (AML), 118 clear cell renal cell carcinomas (ccRCC), 48 papillary RCCs (pRCC), and 25 chromophobe RCCs (chRCC), collected from January 2017 to December 2019. Two deep learning models, based on the ResNet-18 and RepVGG architectures, were trained and validated to distinguish between these subtypes. The models' performance was assessed using sensitivity, specificity, positive predictive value, negative predictive value, F1 score, Matthews correlation coefficient, accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrix analysis. Class activation mapping (CAM) was applied to visualize the specific regions that contributed to the models' predictions. The ResNet-18 and RepVGG-A0 models achieved overall accuracies of 76.7% and 84.5%, respectively, across all four subtypes. The AUCs for AML, ccRCC, pRCC, and chRCC were 0.832, 0.829, 0.806, and 0.795 for the ResNet-18 model, compared to 0.906, 0.911, 0.840, and 0.827 for the RepVGG-A0 model, respectively. The deep learning models could reliably differentiate between various histological subtypes of renal tumors using CEUS images in an objective and non-invasive manner.
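Per-subtype AUCs like those above are typically computed one-vs-rest from the model's class probabilities: each class in turn is treated as positive and the other three as negative. An illustrative sketch, with random scores standing in for the models' actual outputs:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
classes = ["AML", "ccRCC", "pRCC", "chRCC"]
n = 200  # illustrative sample count, not the study's cohort

y_true = rng.integers(0, len(classes), size=n)   # ground-truth subtype labels
scores = rng.random((n, len(classes)))
scores /= scores.sum(axis=1, keepdims=True)      # normalize to probability rows

# One-vs-rest AUC: binarize the labels per class, score with that class's column.
aucs = {}
for k, name in enumerate(classes):
    aucs[name] = roc_auc_score((y_true == k).astype(int), scores[:, k])
print({name: round(v, 3) for name, v in aucs.items()})
```

With random scores these AUCs hover near 0.5; a trained model shifts them toward 1.0, as in the figures reported above.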

Enhancing Breast Cancer Detection Through Optimized Thermal Image Analysis Using PRMS-Net Deep Learning Approach.

Khan M, Su'ud MM, Alam MM, Karimullah S, Shaik F, Subhan F

pubmed logopapers · May 6, 2025
Breast cancer remains one of the most frequent and life-threatening cancers in women globally, underscoring the need for better early-stage diagnostics to improve therapy effectiveness and survival. This work enhances breast cancer assessment by employing progressive residual networks (PRN) and ResNet-50 within the Progressive Residual Multi-Class Support Vector Machine-Net (PRMS-Net) framework. Built on deep learning concepts, this integration optimizes feature extraction and raises classification effectiveness, achieving 99.63% accuracy in our tests. These findings indicate that PRMS-Net can serve as an efficient and reliable diagnostic tool for early breast cancer detection, aiding radiologists in improving diagnostic accuracy and reducing false positives. The architecture's reliability was assessed by partitioning the data into segments under a fivefold cross-validation approach. The variability of precision, recall, and F1 scores depicted in the box plots further supports the model's sensitivity and specificity, both essential for limiting false positive and false negative cases in real clinical practice. The error-distribution analysis additionally supports the model's practical applicability in medical image processing. The combination of sensitive feature extraction and sophisticated classification methods makes PRMS-Net a powerful tool for improving the early detection of breast cancer and subsequent patient prognosis.
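The fivefold cross-validation described above partitions the data into five folds, training on four and validating on the held-out fold in turn, so every sample is validated exactly once. A sketch with scikit-learn (a logistic regression on synthetic data stands in for PRMS-Net, whose implementation is not public):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Synthetic stand-in data; StratifiedKFold keeps class proportions per fold.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)
# One accuracy per fold; the spread across folds is what the paper's
# box plots of precision/recall/F1 visualize.
print(np.round(scores, 3), round(scores.mean(), 3))
```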

Corticospinal tract reconstruction with tumor by using a novel direction filter based tractography method.

Zeng Q, Xia Z, Huang J, Xie L, Zhang J, Huang S, Xing Z, Zhuge Q, Feng Y

pubmed logopapers · May 6, 2025
The corticospinal tract (CST) is the primary neural pathway responsible for voluntary motor functions, and preoperative CST reconstruction is crucial for preserving nerve function during neurosurgery. Diffusion magnetic resonance imaging-based tractography is the only noninvasive method for preoperative CST reconstruction in clinical practice. However, for a large bundle such as the CST, with its complex fiber geometry (fanning fibers), reconstructing the full extent remains challenging for locally derived methods that do not incorporate global information. This is especially true in the presence of tumors, where the mass effect and partial volume effect produce abnormal diffusion signals. In this work, a CST reconstruction tractography method based on a novel direction filter was proposed, designed to ensure robust CST reconstruction in clinical datasets with tumors. A direction filter based on a fourth-order differential equation was introduced for global direction estimation. By enforcing spatial consistency and leveraging anatomical prior knowledge, the direction filter was computed by minimizing the energy between the target directions and the initial fiber directions. Using the new CST directions obtained by the direction filter, fiber tracking was performed to reconstruct the fiber trajectories. Additionally, a deep learning-based method combined with tractography template prior information was employed to generate the regions of interest (ROIs) and initial fiber directions. Experimental results showed that the proposed method yields more valid connections, fewer no-connections, and the fewest broken and short-connected fibers. The proposed method offers an effective tool to enhance CST-related surgical outcomes by optimizing tumor resection while preserving the CST.

Artificial intelligence applications for the diagnosis of pulmonary nodules.

Ost DE

pubmed logopapers · May 6, 2025
This review evaluates the role of artificial intelligence (AI) in diagnosing solitary pulmonary nodules (SPNs), focusing on clinical applications and limitations in pulmonary medicine. It explores AI's utility in imaging and blood/tissue-based diagnostics, emphasizing practical challenges over the technical details of deep learning methods. AI enhances computed tomography (CT)-based computer-aided diagnosis (CAD) through steps like nodule detection, false positive reduction, segmentation, and classification, leveraging convolutional neural networks and machine learning. Segmentation achieves Dice similarity coefficients of 0.70-0.92, while malignancy classification yields areas under the curve of 0.86-0.97. AI-driven blood tests, incorporating RNA sequencing and clinical data, report AUCs up to 0.907 for distinguishing benign from malignant nodules. However, most models lack prospective, multi-institutional validation, risking overfitting and limited generalizability. The "black box" nature of AI, coupled with inputs (e.g., nodule size, smoking history) that overlap with physician assessments, complicates integration into clinical workflows and precludes standard Bayesian analysis. AI shows promise for SPN diagnosis but requires rigorous validation in diverse populations and better clinician training for effective use. Rather than replacing judgment, AI should serve as a second opinion, with its reported performance metrics understood as study-specific, not directly applicable at the bedside due to double-counting issues.
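The standard Bayesian analysis the review says is precluded works by converting a pre-test probability to odds and multiplying by the test's likelihood ratio; this is valid only when the test result is independent of the pre-test assessment. When an AI model consumes the same inputs the physician already used (nodule size, smoking history), that independence fails and the evidence is double-counted. A sketch of the update itself, with purely illustrative numbers:

```python
def post_test_probability(pre_test_p, sensitivity, specificity, positive=True):
    """Bayesian update of malignancy probability from a test result.
    Valid only if the test is independent of the pre-test assessment."""
    if positive:
        lr = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    else:
        lr = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    odds = pre_test_p / (1.0 - pre_test_p) * lr  # prior odds × LR = posterior odds
    return odds / (1.0 + odds)                   # back to a probability

# Illustrative numbers only: 30% pre-test probability of malignancy and a
# hypothetical AI test with 90% sensitivity and 80% specificity.
print(round(post_test_probability(0.30, 0.90, 0.80), 3))  # 0.659
```

If the AI already encodes the pre-test factors, applying this formula on top of the physician's estimate overstates the posterior, which is the double-counting issue the review flags.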

V3DQutrit: a volumetric medical image segmentation based on a 3D qutrit optimized modified tensor ring model.

Verma P, Kumar H, Shukla DK, Satpathy S, Alsekait DM, Khalaf OI, Alzoubi A, Alqadi BS, AbdElminaam DS, Kushwaha A, Singh J

pubmed logopapers · May 6, 2025
This paper introduces 3D-QTRNet, a novel quantum-inspired neural network for volumetric medical image segmentation. Unlike conventional CNNs, which suffer from slow convergence and high complexity, and QINNs, which are limited to grayscale segmentation, our approach leverages qutrit encoding and tensor ring decomposition. These techniques improve segmentation accuracy, optimize memory usage, and accelerate model convergence. The proposed model demonstrates superior performance on the BRATS19 and Spleen datasets, outperforming state-of-the-art CNN and quantum models in terms of Dice similarity and segmentation precision. This work bridges the gap between quantum computing and medical imaging, offering a scalable solution for real-world applications.
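Tensor ring decomposition, one of the two techniques named above, stores a d-way tensor as a ring of d third-order cores, trading a controlled approximation for far fewer parameters. A minimal NumPy sketch of the reconstruction contraction and the parameter saving (illustrative only, not the authors' 3D-QTRNet implementation):

```python
import numpy as np

def tr_reconstruct(cores):
    """Contract tensor-ring cores G_k of shape (r_k, n_k, r_{k+1})
    back into the full tensor; the last rank wraps around to the first."""
    out = cores[0]                                    # (r0, n0, r1)
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([-1], [0]))
    # out has shape (r0, n0, ..., n_{d-1}, r0): close the ring by tracing
    # over the first and last bond dimensions.
    return np.trace(out, axis1=0, axis2=-1)

shape, rank = (8, 8, 8), 3           # toy volume and a uniform TR rank
cores = [np.random.rand(rank, n, rank) for n in shape]
full = tr_reconstruct(cores)

n_full = int(np.prod(shape))               # dense storage: 512 entries
n_tr = sum(c.size for c in cores)          # TR storage: 3 * (3*8*3) = 216
print(full.shape, n_full, n_tr)            # (8, 8, 8) 512 216
```

The saving grows rapidly with tensor order and mode sizes, which is what makes the format attractive for compressing volumetric segmentation networks.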

Diagnosis of Sarcopenia Using Convolutional Neural Network Models Based on Muscle Ultrasound Images: Prospective Multicenter Study.

Chen ZT, Li XL, Jin FS, Shi YL, Zhang L, Yin HH, Zhu YL, Tang XY, Lin XY, Lu BL, Wang Q, Sun LP, Zhu XX, Qiu L, Xu HX, Guo LH

pubmed logopapers · May 6, 2025
Early detection is clinically crucial for the strategic handling of sarcopenia, yet the screening process, which includes assessments of muscle mass, strength, and function, remains complex and difficult to access. This study aims to develop a convolutional neural network model based on ultrasound images to simplify the diagnostic process and improve its accessibility. This study prospectively evaluated 357 participants (101 with sarcopenia and 256 without sarcopenia) for training, encompassing three types of data: muscle ultrasound images, clinical information, and laboratory information. Three monomodal models, one per data type, were developed in the training cohort. The data type with the best diagnostic performance was selected to develop the bimodal and multimodal models by adding one or two further data types. Subsequently, the diagnostic performance of these models was compared. The contribution ratios of the different data types were further analyzed for the multimodal model. A sensitivity analysis was performed by excluding 86 cases with missing values and retaining 271 complete cases for robustness validation. By comprehensive comparison, we finally identified the optimal model (SARCO model) as the convenient solution. Moreover, the SARCO model underwent an external validation with 145 participants (68 with sarcopenia and 77 without sarcopenia) and a proof-of-concept validation with 82 participants (19 with sarcopenia and 63 without sarcopenia) from two other hospitals. The monomodal model based on ultrasound images achieved the highest area under the receiver operating characteristic curve (AUC) of 0.827 and F1-score of 0.738 among the three monomodal models. Sensitivity analysis on the complete data further confirmed the superiority of the ultrasound image model (AUC: 0.851; F1-score: 0.698).
The performance of the multimodal model demonstrated statistically significant differences compared to the best monomodal model (AUC: 0.845 vs 0.827; P=.02), as well as to the two bimodal models based on ultrasound images+clinical information (AUC: 0.845 vs 0.826; P=.03) and ultrasound images+laboratory information (AUC: 0.845 vs 0.832; P=.035). On the other hand, ultrasound images contributed the most evidence for diagnosing sarcopenia (0.787) and nonsarcopenia (0.823) in the multimodal model. Sensitivity analysis showed consistent performance trends, with ultrasound images remaining the dominant contributor (Shapley additive explanation values: 0.810 for sarcopenia and 0.795 for nonsarcopenia). After comprehensive clinical analysis, the monomodal model based on ultrasound images was identified as the SARCO model. Subsequently, the SARCO model achieved satisfactory prediction performance in the external validation and proof-of-concept validation, with AUCs of 0.801 and 0.757 and F1-scores of 0.727 and 0.666, respectively. All three types of data contributed to sarcopenia diagnosis, while ultrasound images played a dominant role in model decision-making. The SARCO model based on ultrasound images is potentially the most convenient solution for diagnosing sarcopenia. Chinese Clinical Trial Registry ChiCTR2300073651; https://www.chictr.org.cn/showproj.html?proj=199199.
