Page 43 of 2222215 results

Anterior cruciate ligament tear detection based on Res2Net modified by improved Lévy flight distribution.

Yang P, Liu Y, Liu F, Han M, Abdi Y

PubMed · Jul 1, 2025
Anterior cruciate ligament (ACL) tears are common in sports and can cause significant health problems, so accurate and early diagnosis is important for proper treatment. However, traditional diagnostic methods, such as clinical assessment and MRI, have limitations in accuracy and efficiency. This study introduces a new diagnostic approach that combines the Res2Net deep learning architecture with an improved Lévy flight distribution (ILFD) to improve the detection of ACL tears in knee MRI images. The Res2Net model is known for its ability to extract salient features and classify them effectively, and optimizing it with the ILFD algorithm greatly improves diagnostic efficiency. To validate the proposed model, it was applied to two standard datasets, from Stanford University Medical Center and Clinical Hospital Centre Rijeka. Comparative analysis with existing diagnostic methods, including the 14-layer ResNet-14, a Compact Parallel Deep Convolutional Neural Network (CPDCNN), a Convolutional Neural Network (CNN), a Generative Adversarial Network (GAN), and a combined CNN and Modified Golden Search Algorithm (CNN/MGSA), shows that the proposed Res2Net/ILFD model performs better across metrics including precision, recall, accuracy, F1-score, specificity, and Matthews correlation coefficient.
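The paper does not reproduce its ILFD implementation, but the core of any Lévy-flight-based optimizer is a heavy-tailed random step. A minimal sketch using Mantegna's algorithm, the standard construction for such steps (`levy_step` and `levy_perturb` are illustrative names, not from the paper):

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """Draw one Levy-flight step via Mantegna's algorithm: the ratio
    u / |v|^(1/beta), with u ~ N(0, sigma_u) and v ~ N(0, 1), has a
    heavy-tailed distribution controlled by the stability index beta."""
    sigma_u = (
        math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
        / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))
    ) ** (1 / beta)
    u = rng.gauss(0, sigma_u)
    v = rng.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def levy_perturb(position, scale=0.01, beta=1.5, rng=random):
    """Perturb a candidate solution (e.g. a hyperparameter vector)
    with independent Levy steps — mostly small moves, occasional
    long jumps that help escape local optima."""
    return [x + scale * levy_step(beta, rng) for x in position]
```

The occasional long jumps are what distinguish Lévy flights from Gaussian random search and motivate their use in metaheuristic model tuning.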

Ultrasound-based classification of follicular thyroid cancer using deep convolutional neural networks with transfer learning.

Agyekum EA, Yuzhi Z, Fang Y, Agyekum DN, Wang X, Issaka E, Li C, Shen X, Qian X, Wu X

PubMed · Jul 1, 2025
This study aimed to develop and validate convolutional neural network (CNN) models for distinguishing follicular thyroid carcinoma (FTC) from follicular thyroid adenoma (FTA), and compared their performance with the American College of Radiology Thyroid Imaging Reporting and Data System (ACR-TIRADS) and Chinese Thyroid Imaging Reporting and Data System (C-TIRADS) ultrasound-based malignancy risk stratification systems. A total of 327 eligible patients with FTC or FTA who underwent preoperative thyroid ultrasound examination between August 2017 and August 2024 were retrospectively enrolled. Patients were randomly assigned to a training cohort (n = 263) and a test cohort (n = 64) in an 8:2 ratio using stratified sampling. Five CNN models (VGG16, ResNet101, MobileNetV2, ResNet152, and ResNet50), pre-trained on ImageNet, were developed and tested to distinguish FTC from FTA. The CNN models exhibited good performance, yielding areas under the receiver operating characteristic curve (AUC) ranging from 0.64 to 0.77. The ResNet152 model demonstrated the highest AUC (0.77; 95% CI, 0.67-0.87) for distinguishing between FTC and FTA. Decision curve and calibration curve analyses demonstrated the models' favorable clinical value and calibration. Furthermore, the developed models outperformed both the C-TIRADS and ACR-TIRADS systems, which can potentially guide appropriate management of FTC in patients with follicular neoplasms.
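The AUC values quoted above (0.64-0.77) have a simple probabilistic reading: the chance that a randomly chosen FTC case receives a higher model score than a randomly chosen FTA case, with ties counting half. A minimal pure-Python sketch of that rank-based (Mann-Whitney) computation — illustrative, not the authors' evaluation code:

```python
def auc_from_scores(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs in which the positive case scores higher
    (ties contribute 0.5). labels are 1 (positive) or 0 (negative)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in pos for n in neg
    )
    return wins / (len(pos) * len(neg))
```

This O(P·N) pairwise form is fine for test cohorts of this size (n = 64); production libraries use a sort-based equivalent.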

Enhancing ultrasonographic detection of hepatocellular carcinoma with artificial intelligence: current applications, challenges and future directions.

Wongsuwan J, Tubtawee T, Nirattisaikul S, Danpanichkul P, Cheungpasitporn W, Chaichulee S, Kaewdech A

PubMed · Jul 1, 2025
Hepatocellular carcinoma (HCC) remains a leading cause of cancer-related mortality worldwide, with early detection playing a crucial role in improving survival rates. Artificial intelligence (AI), particularly in medical image analysis, has emerged as a potential tool for HCC diagnosis and surveillance. Recent advancements in deep learning-driven medical imaging have demonstrated significant potential in enhancing early HCC detection, particularly in ultrasound (US)-based surveillance. This review provides a comprehensive analysis of the current landscape, challenges, and future directions of AI in HCC surveillance, with a specific focus on its application in US imaging. Additionally, it explores AI's transformative potential in clinical practice and its implications for improving patient outcomes. We examine various AI models developed for HCC diagnosis, highlighting their strengths and limitations, with a particular emphasis on deep learning approaches. Among these, convolutional neural networks have shown notable success in detecting and characterising different focal liver lesions on B-mode US, often outperforming conventional radiological assessments. Despite these advancements, several challenges hinder AI integration into clinical practice, including data heterogeneity, a lack of standardisation, concerns regarding model interpretability, regulatory constraints, and barriers to real-world clinical adoption. Addressing these issues necessitates the development of large, diverse, and high-quality data sets to enhance the robustness and generalisability of AI models. Emerging trends in AI for HCC surveillance, such as multimodal integration, explainable AI, and real-time diagnostics, offer promising advancements. These innovations have the potential to significantly improve the accuracy, efficiency, and clinical applicability of AI-driven HCC surveillance, ultimately contributing to enhanced patient outcomes.

Artificial intelligence enhances diagnostic accuracy of contrast enemas in Hirschsprung disease compared to clinical experts.

Vargova P, Varga M, Izquierdo Hernandez B, Gutierrez Alonso C, Gonzalez Esgueda A, Cobos Hernandez MV, Fernandez R, González-Ruiz Y, Bragagnini Rodriguez P, Del Peral Samaniego M, Corona Bellostas C

PubMed · Jul 1, 2025
Introduction: Contrast enema (CE) is widely used in the evaluation of suspected Hirschsprung disease (HD). Deep learning is a promising tool for standardizing image assessment and supporting clinical decision-making. This study assesses the diagnostic performance of a deep neural network (DNN), with and without clinical data, and compares its interpretation with that of pediatric surgeons and radiologists. Materials and Methods: In this retrospective study, 1471 contrast enema images from patients <15 years of age were analysed, with 218 images used for testing. A deep neural network, pediatric radiologists, and surgeons independently reviewed the testing set, with and without clinical data. Diagnostic performance was assessed using ROC and PR curves, and interobserver agreement was evaluated using Fleiss' kappa. Results: The deep neural network achieved high diagnostic accuracy (AUC-ROC = 0.87) in contrast enema interpretation, with improved performance when combining anteroposterior and lateral images (AUC-ROC = 0.92). Integrating clinical data further enhanced model sensitivity and negative predictive value. The super-surgeon (majority voting of colorectal surgeons) outperformed most individual clinicians (sensitivity 81.8%, specificity 79.1%), while the super-radiologist (majority voting of radiologists) showed moderate accuracy. Interobserver analysis revealed strong agreement between the model and surgeons (Cohen's kappa = 0.73) and overall consistency among experts and the model (Fleiss' kappa = 0.62). Conclusions: AI-assisted CE interpretation achieved higher specificity than, and sensitivity comparable to, the clinicians. Its consistent performance and substantial agreement with experts support its potential role in improving CE assessment in HD.
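The interobserver statistic used above, Fleiss' kappa, generalizes Cohen's kappa to more than two raters by comparing observed per-subject agreement against the agreement expected by chance. A minimal sketch for a subjects-by-categories count matrix (illustrative, not the study's analysis code):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a list of rows, where ratings[i][j] is how
    many raters assigned subject i to category j; every subject must
    be rated by the same number of raters."""
    n_subjects = len(ratings)
    n_raters = sum(ratings[0])
    n_categories = len(ratings[0])
    # overall proportion of assignments in each category
    p_j = [sum(row[j] for row in ratings) / (n_subjects * n_raters)
           for j in range(n_categories)]
    # per-subject pairwise agreement among raters
    p_i = [(sum(c * c for c in row) - n_raters)
           / (n_raters * (n_raters - 1)) for row in ratings]
    p_bar = sum(p_i) / n_subjects          # mean observed agreement
    p_e = sum(p * p for p in p_j)          # chance agreement
    return (p_bar - p_e) / (1 - p_e)
```

By the usual Landis-Koch convention, the study's value of 0.62 falls in the "substantial agreement" band (0.61-0.80).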

Dual-threshold sample selection with latent tendency difference for label-noise-robust pneumoconiosis staging.

Zhang S, Ren X, Qiang Y, Zhao J, Qiao Y, Yue H

PubMed · Jul 1, 2025
Background: Precise pneumoconiosis staging suffers from progressive pair label noise (PPLN) in chest X-ray datasets, because adjacent stages are confused due to unidentifiable and diffuse opacities in the lung fields. When deep neural networks are employed to aid disease staging, performance degrades under such label noise. Objective: This study improves the effectiveness of pneumoconiosis staging by mitigating the impact of PPLN through network architecture refinement and an adjusted sample selection mechanism. Methods: We propose a novel multi-branch architecture that incorporates dual-threshold sample selection. Several auxiliary branches are integrated in a two-phase module to learn and predict the progressive feature tendency. A novel difference-based metric is introduced to iteratively obtain instance-specific thresholds as a complementary criterion for dynamic sample selection. All samples are finally partitioned into clean and hard sets according to the dual-threshold criteria and treated differently by loss functions with penalty terms. Results: Compared with the state of the art, the proposed method obtains the best metrics (accuracy: 90.92%, precision: 84.25%, sensitivity: 81.11%, F1-score: 82.06%, and AUC: 94.64%) under real-world PPLN and is less sensitive to rises in the synthetic PPLN rate. An ablation study validates the respective contributions of the critical modules and demonstrates how variations of essential hyperparameters affect model performance. Conclusions: The proposed method achieves substantial effectiveness and robustness against PPLN in a pneumoconiosis dataset and can further assist physicians in diagnosing the disease with higher accuracy and confidence.
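The abstract does not specify the paper's exact selection metric, but the dual-threshold idea — a sample counts as "clean" only if it clears both a global dynamic threshold and its own instance-specific threshold, and is otherwise routed to the "hard" set for a penalised loss — can be sketched as follows (the function name and the use of per-sample loss as the criterion are assumptions for illustration):

```python
def dual_threshold_partition(losses, global_t, instance_t):
    """Split sample indices into clean and hard sets. A sample is
    clean only if its per-sample loss is within BOTH the global
    dynamic threshold and its instance-specific threshold; all other
    samples are hard and would be trained with a penalty term."""
    clean, hard = [], []
    for i, loss in enumerate(losses):
        if loss <= global_t and loss <= instance_t[i]:
            clean.append(i)
        else:
            hard.append(i)
    return clean, hard
```

In this style of training, both thresholds would be re-estimated each epoch, so the clean/hard partition adapts as the network's per-sample losses evolve.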

CT-free attenuation and Monte-Carlo based scatter correction-guided quantitative <sup>90</sup>Y-SPECT imaging for improved dose calculation using deep learning.

Mansouri Z, Salimi Y, Wolf NB, Mainta I, Zaidi H

PubMed · Jul 1, 2025
This work aimed to develop deep learning (DL) models for CT-free attenuation and Monte Carlo-based scatter correction (AC, SC) in quantitative <sup>90</sup>Y SPECT imaging for improved dose calculation. Data from 190 patients who underwent <sup>90</sup>Y selective internal radiation therapy (SIRT) with glass microspheres were studied. Voxel-level dosimetry was performed on uncorrected and corrected SPECT images using the local energy deposition method. Three deep learning models were trained individually for AC, SC, and joint ASC using a modified 3D shifted-window UNet Transformer (Swin UNETR) architecture. Corrected and uncorrected dose maps served as references and inputs, respectively. The data were split into a training set (~80%) and an unseen test set (~20%). Training was conducted in a five-fold cross-validation scheme, and the trained models were tested on the unseen test set. Model performance was thoroughly evaluated by comparing organ- and voxel-level dosimetry results between the reference and DL-generated dose maps on the unseen test dataset. The voxel- and organ-level evaluations also included gamma analysis with three different distance-to-agreement (DTA, mm) and dose-difference (DD, %) criteria to explore suitable criteria for SIRT dosimetry using SPECT. The average ± SD voxel-level quantitative metrics for the AC task are mean error (ME, Gy): -0.026 ± 0.06, structural similarity index (SSIM, %): 99.5 ± 0.25, and peak signal-to-noise ratio (PSNR, dB): 47.28 ± 3.31. For the SC task these values are -0.014 ± 0.05, 99.88 ± 0.099, and 55.9 ± 4, and for the ASC task -0.04 ± 0.06, 99.57 ± 0.33, and 47.97 ± 3.6, respectively. Voxel-level gamma pass rates with three different criteria, namely "DTA: 4.79 mm, DD: 1%", "DTA: 10 mm, DD: 5%", and "DTA: 15 mm, DD: 10%", were around 98%. The mean absolute error (MAE, Gy) for tumor and whole normal liver across tasks is as follows: 7.22 ± 5.9 and 1.09 ± 0.86 for AC, 8 ± 9.3 and 0.9 ± 0.8 for SC, and 11.8 ± 12.02 and 1.3 ± 0.98 for ASC, respectively. We developed models for three different clinical scenarios, namely AC, SC, and ASC, using patient-specific Monte Carlo scatter-corrected and CT-based attenuation-corrected images. After training with a larger dataset, these task-specific models could be beneficial for performing the essential corrections where CT images are either unavailable or unreliable due to misalignment.
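Gamma analysis, used for the pass rates above, combines a distance-to-agreement (DTA) tolerance and a dose-difference (DD) tolerance into a single per-voxel pass/fail score: a voxel passes if its minimum combined metric is ≤ 1. A deliberately simplified 1-D, global-criterion sketch (real dosimetry tools such as those used in the paper operate on 3-D grids with interpolation; names are illustrative):

```python
import math

def gamma_pass_rate(ref, eval_, spacing_mm, dta_mm, dd_frac):
    """Simplified global 1-D gamma analysis. For each reference point,
    search all evaluated points for the minimum of
    (distance/DTA)^2 + (dose difference / (dd_frac * max reference dose))^2;
    the point passes if the square root of that minimum is <= 1."""
    d_max = max(ref)
    passed = 0
    for i, dr in enumerate(ref):
        best = float("inf")
        for j, de in enumerate(eval_):
            dist = abs(i - j) * spacing_mm
            dd = de - dr
            g2 = (dist / dta_mm) ** 2 + (dd / (dd_frac * d_max)) ** 2
            best = min(best, g2)
        if math.sqrt(best) <= 1.0:
            passed += 1
    return passed / len(ref)
```

The tighter the DTA/DD criteria, the lower the pass rate, which is why the paper probes three criteria to find ones informative for SIRT SPECT dosimetry.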

Cerebrovascular morphology: Insights into normal variations, aging effects and disease implications.

Deshpande A, Zhang LQ, Balu R, Yahyavi-Firouz-Abadi N, Badjatia N, Laksari K, Tahsili-Fahadan P

PubMed · Jul 1, 2025
Cerebrovascular morphology plays a critical role in brain health, influencing cerebral blood flow (CBF) and contributing to the pathogenesis of various neurological diseases. This review examines the anatomical structure of the cerebrovascular network and its variations in healthy and diseased populations and highlights age-related changes and their implications in various neurological conditions. Normal variations, including the completeness and anatomical anomalies of the Circle of Willis and collateral circulation, are discussed in relation to their impact on CBF and susceptibility to ischemic events. Age-related changes in the cerebrovascular system, such as alterations in vessel geometry and density, are explored for their contributions to age-related neurological disorders, including Alzheimer's disease and vascular dementia. Advances in medical imaging and computational methods have enabled automatic quantitative assessment of cerebrovascular structures, facilitating the identification of pathological changes in both acute and chronic cerebrovascular disorders. Emerging technologies, including machine learning and computational fluid dynamics, offer new tools for predicting disease risk and patient outcomes based on vascular morphology. This review underscores the importance of understanding cerebrovascular remodeling for early diagnosis and the development of novel therapeutic approaches in brain diseases.

Multi-scale geometric transformer for sparse-view X-ray 3D foot reconstruction.

Wang W, An L, Han G

PubMed · Jul 1, 2025
Sparse-view X-ray 3D foot reconstruction aims to recover the three-dimensional structure of the foot from sparse-view X-ray images, a challenging task due to data sparsity and limited viewpoints. This paper presents a novel method that uses a multi-scale geometric Transformer to enhance reconstruction accuracy and detail representation. Geometric position encoding and a window mechanism divide X-ray images into local areas, finely capturing local features. A multi-scale Transformer module based on Neural Radiance Fields (NeRF) strengthens the model's ability to represent and capture details in complex structures, and an adaptive weight learning strategy further optimizes the Transformer's feature extraction and long-range dependency modelling. Experimental results demonstrate that the proposed method significantly improves the reconstruction accuracy and detail preservation of the foot structure under sparse-view X-ray conditions: the multi-scale geometric Transformer effectively captures both local and global features, leading to more accurate and detailed 3D reconstructions from sparse-view X-ray images.
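Geometric position encodings in NeRF-style models conventionally project each coordinate onto sinusoids of doubling frequency, so the network can represent high-frequency structural detail that raw coordinates cannot express. A minimal scalar sketch of that convention (an assumption about this paper's exact encoding, shown for illustration):

```python
import math

def positional_encoding(x, num_freqs=4):
    """NeRF-style positional encoding of a scalar coordinate x:
    concatenate sin(2^k * pi * x) and cos(2^k * pi * x) for
    k = 0 .. num_freqs-1, yielding 2 * num_freqs features."""
    feats = []
    for k in range(num_freqs):
        feats.append(math.sin((2 ** k) * math.pi * x))
        feats.append(math.cos((2 ** k) * math.pi * x))
    return feats
```

Each extra frequency band doubles the finest spatial variation the downstream Transformer or MLP can resolve, which is why such encodings help with fine bone detail.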

Machine learning-based model to predict long-term tumor control and additional interventions following pituitary surgery for Cushing's disease.

Shinya Y, Ghaith AK, Hong S, Erickson D, Bancos I, Herndon JS, Davidge-Pitts CJ, Nguyen RT, Bon Nieves A, Sáez Alegre M, Morshed RA, Pinheiro Neto CD, Peris Celda M, Pollock BE, Meyer FB, Atkinson JLD, Van Gompel JJ

PubMed · Jul 1, 2025
In this study, the authors aimed to establish a supervised machine learning (ML) model based on multiple tree-based algorithms to predict long-term biochemical outcomes and intervention-free survival (IFS) after endonasal transsphenoidal surgery (ETS) in patients with Cushing's disease (CD). The medical records of patients who underwent ETS for CD between 2013 and 2023 were reviewed. Data were collected on patients' baseline characteristics, intervention details, histopathology, surgical outcomes, and postoperative endocrine function. The study's primary outcome was IFS, and therapeutic outcomes were labeled as "under control" or "treatment failure," depending on whether additional therapeutic interventions were required after primary ETS. Decision tree and random forest classifiers were trained and tested to predict long-term IFS on unseen data, using an 80/20 cohort split. Data from 150 patients, with a median follow-up of 56 months, were extracted. In this cohort, 42 (28%) patients required additional intervention for persistent or recurrent CD; consequently, IFS rates following ETS alone were 83% at 3 years and 78% at 5 years. Multivariable Cox proportional hazards analysis demonstrated that a smaller MRI-detectable tumor diameter (hazard ratio 0.95, 95% CI 0.90-0.99; p = 0.047) was significantly associated with longer IFS. However, the lack of tumor detection on MRI was a poor predictor. The decision tree model displayed 91% accuracy (95% CI 0.70-0.94; sensitivity 87.0%, specificity 89.0%) in predicting IFS in the unseen test dataset. Random forest analysis revealed that tumor size (mean minimal depth 1.67), Knosp grade (1.75), patient age (1.80), and BMI (1.99) were the four most significant predictors of long-term IFS.
The ML algorithm predicted long-term postoperative endocrinological remission in CD with high accuracy, indicating that prognosis may vary not only with previously reported factors such as tumor size, Knosp grade, gross-total resection, and patient age, but also with BMI. The decision tree flowchart could potentially stratify patients with CD before ETS, allowing the selection of personalized treatment options and thereby assisting in treatment planning. This ML model may also deepen understanding of the complex mechanisms of CD by uncovering patterns embedded in the data.
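Intervention-free survival figures like the 83%/78% above are conventionally estimated with the Kaplan-Meier product-limit method, which handles censored follow-up. A minimal sketch (illustrative only; the study's analysis additionally used Cox regression, and this simplified version treats tied event/censoring times coarsely):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimate. events[i] is 1 if the
    endpoint (e.g. an additional intervention) occurred at times[i],
    0 if the patient was censored then. Returns (time, survival)
    step points at each distinct event time."""
    data = sorted(zip(times, events))  # ties end up adjacent
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at t
        n = sum(1 for tt, _ in data if tt == t)   # subjects leaving at t
        if d > 0:
            surv *= 1 - d / n_at_risk
            curve.append((t, surv))
        n_at_risk -= n
        i += n
    return curve
```

Censored patients simply shrink the risk set without forcing a drop in the curve, which is why IFS at 5 years can exceed the naive fraction of patients never reintervened on.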

Deep learning-based segmentation of the trigeminal nerve and surrounding vasculature in trigeminal neuralgia.

Halbert-Elliott KM, Xie ME, Dong B, Das O, Wang X, Jackson CM, Lim M, Huang J, Yedavalli VS, Bettegowda C, Xu R

PubMed · Jul 1, 2025
Preoperative workup of trigeminal neuralgia (TN) consists of identification of neurovascular features on MRI. In this study, the authors apply and evaluate the performance of deep learning models for segmentation of the trigeminal nerve and surrounding vasculature to quantify anatomical features of the nerve and vessels. Six U-Net-based neural networks, each with a different encoder backbone, were trained to label constructive interference in steady-state MRI voxels as nerve, vasculature, or background. A retrospective dataset of 50 TN patients at the authors' institution who underwent preoperative high-resolution MRI in 2022 was utilized to train and test the models. Performance was measured by the Dice coefficient and intersection over union (IoU) metrics. Anatomical characteristics, such as surface area of neurovascular contact and distance to the contact point, were computed and compared between the predicted and ground truth segmentations. Of the evaluated models, the best performing was U-Net with an SE-ResNet50 backbone (Dice score = 0.775 ± 0.015, IoU score = 0.681 ± 0.015). When the SE-ResNet50 backbone was used, the average surface area of neurovascular contact in the testing dataset was 6.90 mm<sup>2</sup>, which was not significantly different from the surface area calculated from manual segmentation (p = 0.83). The average calculated distance from the brainstem to the contact point was 4.34 mm, which was also not significantly different from manual segmentation (p = 0.29). U-Net-based neural networks perform well for segmenting the trigeminal nerve and vessels from preoperative MRI volumes. This technology enables the development of quantitative and objective metrics for radiographic evaluation of TN.
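The two segmentation metrics reported above are directly related (IoU = Dice / (2 − Dice), so a Dice of 0.775 implies an IoU near the reported 0.681). A minimal sketch of both for binary masks, with illustrative names:

```python
def dice_iou(pred, truth):
    """Dice coefficient and intersection-over-union for two binary
    masks given as flat sequences of 0/1 voxel labels."""
    inter = sum(p and t for p, t in zip(pred, truth))
    p_sum, t_sum = sum(pred), sum(truth)
    union = p_sum + t_sum - inter
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

In a multi-class setting like nerve/vasculature/background, each foreground class would be scored as its own binary mask and the results averaged.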