
Cephalometric landmark detection using vision transformers with direct coordinate prediction.

Laitenberger F, Scheuer HT, Scheuer HA, Lilienthal E, You S, Friedrich RE

PubMed · Jul 1 2025
Cephalometric Landmark Detection (CLD), i.e. annotating interest points in lateral X-ray images, is the crucial first step of every orthodontic therapy. While CLD has immense potential for automation using Deep Learning methods, carefully crafted contemporary approaches using convolutional neural networks and heatmap prediction do not qualify for large-scale clinical application due to insufficient performance. We propose a novel approach using Vision Transformers (ViTs) with direct coordinate prediction, avoiding the memory-intensive heatmap prediction common in previous work. Through extensive ablation studies comparing our method against contemporary CNN architectures (ConvNext V2) and heatmap-based approaches (Segformer), we demonstrate that ViTs with coordinate prediction achieve superior performance with more than 2 mm improvement in mean radial error compared to state-of-the-art CLD methods. Our results show that while non-adapted CNN architectures perform poorly on the given task, contemporary approaches may be too tailored to specific datasets, failing to generalize to different and especially sparse datasets. We conclude that using general-purpose Vision Transformers with direct coordinate prediction shows great promise for future research on CLD and medical computer vision.
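The core ideas here — a regression head that maps a pooled transformer embedding straight to landmark coordinates (no intermediate heatmap), evaluated by mean radial error — can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation; the head weights and embedding are random stand-ins.

```python
import numpy as np

def mean_radial_error(pred, true):
    """Mean Euclidean distance between predicted and ground-truth landmarks.
    pred, true: (n_landmarks, 2) arrays of (x, y) positions in mm."""
    return float(np.mean(np.linalg.norm(pred - true, axis=1)))

# Hypothetical linear coordinate head: a pooled ViT embedding of size d is
# mapped directly to 2*K landmark coordinates, with no heatmap in between.
rng = np.random.default_rng(0)
d, K = 8, 4
W = rng.normal(size=(d, 2 * K)) * 0.1   # stand-in for learned head weights
embedding = rng.normal(size=(d,))       # stand-in for the ViT output token
coords = (embedding @ W).reshape(K, 2)  # direct (x, y) predictions
print(coords.shape)                     # (4, 2)
```

The memory advantage the abstract alludes to comes from predicting 2·K scalars instead of K full-resolution heatmaps.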

Comparison of Deep Learning Models for fast and accurate dose map prediction in Microbeam Radiation Therapy.

Arsini L, Humphreys J, White C, Mentzel F, Paino J, Bolst D, Caccia B, Cameron M, Ciardiello A, Corde S, Engels E, Giagu S, Rosenfeld A, Tehei M, Tsoi AC, Vogel S, Lerch M, Hagenbuchner M, Guatelli S, Terracciano CM

PubMed · Jul 1 2025
Microbeam Radiation Therapy (MRT) is an innovative radiotherapy modality which uses highly focused synchrotron-generated X-ray microbeams. Current pre-clinical research in MRT mostly relies on Monte Carlo (MC) simulations for dose estimation, which are highly accurate but computationally intensive. Recently, Deep Learning (DL) dose engines have proven effective in generating fast and reliable dose distributions in different RT modalities. However, relatively few studies compare different models on the same task. This work aims to compare a Graph-Convolutional-Network-based DL model, developed in the context of Very High Energy Electron RT, to the Convolutional 3D U-Net that we recently implemented for MRT dose predictions. The two DL solutions are trained with 3D dose maps, generated with the MC toolkit Geant4, in rats used in MRT pre-clinical research. The models are evaluated against Geant4 simulations, used as ground truth, and are assessed in terms of Mean Absolute Error, Mean Relative Error, and a voxel-wise version of the γ-index. Also presented are specific comparisons of predictions in relevant tumor regions, tissue boundaries, and air pockets. The two models are finally compared from the perspective of execution time and model size. This study finds that the two models achieve comparable overall performance. The main differences are found in their dosimetric accuracy within specific regions, such as air pockets, and in their respective inference times. Consequently, the choice between models should be guided primarily by data structure and time constraints, favoring the graph-based method for its flexibility or the 3D U-Net for its faster execution.
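Two of the evaluation metrics named above, Mean Absolute Error and Mean Relative Error over a 3D dose map, are simple voxel-wise reductions. A minimal numpy sketch (toy dose values, not the paper's data or its γ-index implementation):

```python
import numpy as np

def dose_errors(pred, ref, eps=1e-8):
    """Voxel-wise mean absolute error and mean relative error between a
    predicted 3D dose map and a Monte Carlo reference of the same shape."""
    diff = np.abs(pred - ref)
    mae = float(diff.mean())
    mre = float((diff / (np.abs(ref) + eps)).mean())  # eps guards zero-dose voxels
    return mae, mre

ref = np.full((4, 4, 4), 2.0)   # toy reference dose map (Gy)
pred = ref + 0.1                # prediction with a uniform 0.1 Gy bias
mae, mre = dose_errors(pred, ref)
print(round(mae, 3), round(mre, 3))  # 0.1 0.05
```

The γ-index adds a spatial tolerance on top of the dose tolerance, so each voxel also searches its neighborhood for an agreeing reference dose.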

Deep learning radiomics and mediastinal adipose tissue-based nomogram for preoperative prediction of postoperative brain metastasis risk in non-small cell lung cancer.

Niu Y, Jia HB, Li XM, Huang WJ, Liu PP, Liu L, Liu ZY, Wang QJ, Li YZ, Miao SD, Wang RT, Duan ZX

PubMed · Jul 1 2025
Brain metastasis (BM) significantly affects the prognosis of non-small cell lung cancer (NSCLC) patients. Increasing evidence suggests that adipose tissue influences cancer progression and metastasis. This study aimed to develop a predictive nomogram integrating mediastinal fat area (MFA) and deep learning (DL)-derived tumor characteristics to stratify postoperative BM risk in NSCLC patients. A retrospective cohort of 585 surgically resected NSCLC patients was analyzed. Preoperative computed tomography (CT) scans were utilized to quantify MFA using ImageJ software (radiologist-validated measurements). Concurrently, a DL algorithm extracted tumor radiomic features, generating a deep learning brain metastasis score (DLBMS). Multivariate logistic regression identified independent BM predictors, which were incorporated into a nomogram. Model performance was assessed via area under the receiver operating characteristic curve (AUC), calibration plots, integrated discrimination improvement (IDI), net reclassification improvement (NRI), and decision curve analysis (DCA). Multivariate analysis identified N stage, EGFR mutation status, MFA, and DLBMS as independent predictors of BM. The nomogram achieved superior discriminative capacity (AUC: 0.947 in the test set), significantly outperforming conventional models. MFA contributed substantially to predictive accuracy, with IDI and NRI values confirming its incremental utility (IDI: 0.123, <i>P</i> < 0.001; NRI: 0.386, <i>P</i> = 0.023). Calibration analysis demonstrated strong concordance between predicted and observed BM probabilities, while DCA confirmed clinical net benefit across risk thresholds. This DL-enhanced nomogram, incorporating MFA and tumor radiomics, represents a robust and clinically useful tool for preoperative prediction of postoperative BM risk in NSCLC. The integration of adipose tissue metrics with advanced imaging analytics advances personalized prognostic assessment in NSCLC patients.
The online version contains supplementary material available at 10.1186/s12885-025-14466-5.
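The headline metric of studies like this, AUC, has a compact rank-based definition (the Mann-Whitney statistic): the probability that a random positive case is scored above a random negative one. A minimal numpy sketch with toy labels, not the study's pipeline; ties are not handled here:

```python
import numpy as np

def auc_score(y_true, y_prob):
    """AUC via the Mann-Whitney rank-sum statistic (no tie correction)."""
    y_true = np.asarray(y_true)
    order = np.argsort(y_prob)
    ranks = np.empty(len(y_prob))
    ranks[order] = np.arange(1, len(y_prob) + 1)  # rank 1 = lowest score
    pos = y_true == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return float((ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

y = [0, 0, 1, 1]          # toy BM outcomes
p = [0.1, 0.4, 0.35, 0.8]  # toy nomogram probabilities
print(auc_score(y, p))     # 0.75: one positive is out-ranked by one negative
```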

Improving YOLO-based breast mass detection with transfer learning pretraining on the OPTIMAM Mammography Image Database.

Ho PS, Tsai HY, Liu I, Lee YY, Chan SW

PubMed · Jul 1 2025
Early detection of breast cancer through mammography significantly improves survival rates. However, high false positive and false negative rates remain a challenge. Deep learning-based computer-aided diagnosis systems can assist in lesion detection, but their performance is often limited by the availability of labeled clinical data. This study systematically evaluated the effectiveness of transfer learning, image preprocessing techniques, and the latest You Only Look Once (YOLO) model (v9) for optimizing breast mass detection models on small proprietary datasets. We examined 133 mammography images containing masses and assessed various preprocessing strategies, including cropping and contrast enhancement. We further investigated the impact of transfer learning using the OPTIMAM Mammography Image Database (OMI-DB) compared with training on proprietary data alone. The performance of YOLOv9 was evaluated against YOLOv7 to determine improvements in detection accuracy. Pretraining on the OMI-DB dataset with cropped images significantly improved model performance, with YOLOv7 achieving a 13.9% higher mean average precision (mAP) and a 13.2% higher F1-score compared to training only on proprietary data. Among the tested models and configurations, the best results were obtained with YOLOv9 pretrained on OMI-DB and fine-tuned with cropped proprietary images, yielding an mAP of 73.3% ± 16.7% and an F1-score of 76.0% ± 13.4%; under this condition, YOLOv9 outperformed YOLOv7 by 8.1% in mAP and 9.2% in F1-score. This study provides a systematic evaluation of transfer learning and preprocessing techniques for breast mass detection in small datasets. Our results demonstrate that YOLOv9 with OMI-DB pretraining significantly enhances the performance of breast mass detection models while reducing training time, providing a valuable guideline for optimizing deep learning models in data-limited clinical applications.
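The mAP and F1 numbers above both rest on matching predicted boxes to ground-truth masses by intersection-over-union (IoU). A minimal pure-Python sketch of that primitive, independent of any YOLO codebase:

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    if inter == 0.0:
        return 0.0
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# Two overlapping 2x2 boxes: intersection 1, union 7 -> IoU ~0.143
print(box_iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

A detection typically counts as a true positive only above an IoU threshold (often 0.5), so the same predictions can yield different mAP under different thresholds.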

Artificial Intelligence in Obstetric and Gynecological MR Imaging.

Saida T, Gu W, Hoshiai S, Ishiguro T, Sakai M, Amano T, Nakahashi Y, Shikama A, Satoh T, Nakajima T

PubMed · Jul 1 2025
This review explores the significant progress and applications of artificial intelligence (AI) in obstetrics and gynecological MRI, charting its development from foundational algorithmic techniques to deep learning strategies and advanced radiomics. This review features research published over the last few years that has used AI with MRI to identify specific conditions such as uterine leiomyosarcoma, endometrial cancer, cervical cancer, ovarian tumors, and placenta accreta. In addition, it covers studies on the application of AI for segmentation and quality improvement in obstetrics and gynecology MRI. The review also outlines the existing challenges and envisions future directions for AI research in this domain. The growing accessibility of extensive datasets across various institutions and the application of multiparametric MRI are significantly enhancing the accuracy and adaptability of AI. This progress has the potential to enable more accurate and efficient diagnosis, offering opportunities for personalized medicine in the field of obstetrics and gynecology.

Leveraging multithreading on edge computing for smart healthcare based on intelligent multimodal classification approach.

Alghareb FS, Hasan BT

PubMed · Jul 1 2025
Medical digitization has developed intensively over the last decade, paving the way for computer-aided medical diagnosis research. Thus, anomaly detection based on machine and deep learning techniques has been extensively employed in healthcare applications, such as medical imaging classification and monitoring of patients' vital signs. To effectively leverage digitized medical records for identifying challenges in healthcare, this manuscript presents a smart Clinical Decision Support System (CDSS) dedicated to automated diagnosis of multimodal medical data. A smart healthcare system that manages medical data and supports decision-making is proposed. To deliver timely, rapid diagnosis, thread-level parallelism (TLP) is utilized for parallel distribution of classification tasks across three edge computing devices, each employing an AI module for on-device classification. In contrast to existing machine and deep learning classification techniques, the proposed multithreaded architecture realizes a hybrid (ML and DL) processing module on each edge node. In this context, the presented edge computing-based parallel architecture captures a high level of parallelism, tailored for dealing with multiple categories of medical records. The cluster of the proposed architecture encompasses three edge computing Raspberry Pi devices and an edge server. Furthermore, lightweight neural networks, such as MobileNet, EfficientNet, and ResNet18, are trained and optimized with genetic algorithms to classify brain tumors, pneumonia, and colon cancer. Model deployment was conducted in Python, with PyCharm running on the edge server and Thonny installed on the edge nodes.
In terms of accuracy, the proposed GA-optimized ResNet18 for pneumonia diagnosis achieves 93.59% predictive accuracy and reduces the classifier's computational complexity by 33.59%, whereas outstanding accuracies of 99.78% and 100% were achieved with EfficientNet-v2 for brain tumor and colon cancer prediction, respectively, while both models preserve a 25% reduction in classifier size. More importantly, inference speedups of 28.61% and 29.08% were obtained by the parallel two-thread and three-thread DL configurations, respectively, compared to the sequential implementation. Thus, the proposed multimodal, multithreaded architecture offers promising prospects for comprehensive and accurate anomaly detection from patients' medical imaging and vital signs. To summarize, our proposed architecture contributes to the advancement of healthcare services, aiming to improve patient diagnosis and therapy outcomes.
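The thread-level-parallel dispatch described above — one classification task per device, joined when all finish — can be sketched with Python's stdlib `threading`. The model call is a hypothetical stand-in; the paper's nodes run GA-optimized MobileNet / EfficientNet / ResNet18 models, and real speedups depend on the inference releasing the GIL or running on separate devices:

```python
import threading

def run_model(task, batch, results):
    # Stand-in for an on-device classifier; a real node would run its
    # optimized network here and return per-scan predictions.
    results[task] = [f"{task}:{scan}" for scan in batch]

results = {}
threads = [
    threading.Thread(target=run_model, args=(task, ["scan_1", "scan_2"], results))
    for task in ("brain_tumor", "pneumonia", "colon_cancer")
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # barrier: all three modalities classified before reporting
print(sorted(results))  # ['brain_tumor', 'colon_cancer', 'pneumonia']
```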

Generative Artificial Intelligence in Prostate Cancer Imaging.

Haque F, Simon BD, Özyörük KB, Harmon SA, Türkbey B

PubMed · Jul 1 2025
Prostate cancer (PCa) is the second most common cancer in men and has a significant health and social burden, necessitating advances in early detection, prognosis, and treatment strategies. Improvement in medical imaging has significantly impacted early PCa detection, characterization, and treatment planning. However, with an increasing number of patients with PCa and comparatively fewer PCa imaging experts, interpreting large numbers of imaging data is burdensome, time-consuming, and prone to variability among experts. With the revolutionary advances of artificial intelligence (AI) in medical imaging, image interpretation tasks are becoming easier and exhibit the potential to reduce the workload on physicians. Generative AI (GenAI) is a recently popular sub-domain of AI that creates new data instances, often to resemble patterns and characteristics of the real data. This new field of AI has shown significant potential for generating synthetic medical images with diverse and clinically relevant information. In this narrative review, we discuss the basic concepts of GenAI and cover the recent application of GenAI in the PCa imaging domain. This review will help the readers understand where the PCa research community stands in terms of various medical image applications like generating multi-modal synthetic images, image quality improvement, PCa detection, classification, and digital pathology image generation. We also address the current safety concerns, limitations, and challenges of GenAI for technical and clinical adaptation, as well as the limitations of current literature, potential solutions, and future directions with GenAI for the PCa community.

Predicting Crohn's Disease Activity Using Computed Tomography Enterography-Based Radiomics and Serum Markers.

Wang P, Liu Y, Wang Y

PubMed · Jun 30 2025
Accurate stratification of the activity index of Crohn's disease (CD) using computed tomography enterography (CTE) radiomics and serum markers can aid in predicting disease progression and assist physicians in personalizing therapeutic regimens for patients with CD. This retrospective study enrolled 233 patients diagnosed with CD between January 2019 and August 2024. Patients were divided into training and testing cohorts at a ratio of 7:3 and further categorized into remission, mild active phase, and moderate-severe active phase groups based on the simple endoscopic score for CD (SEC-CD). Radiomics features were extracted from venous-phase CTE images, and a T-test and least absolute shrinkage and selection operator (LASSO) regression were applied for feature selection. Serum markers were selected based on analysis of variance. We also developed a random forest (RF) model for multi-class stratification of CD activity. Model performance was evaluated by the area under the receiver operating characteristic curve (AUC), and the contribution of each feature to CD activity was quantified via SHapley Additive exPlanations (SHAP) values. Finally, we incorporated gender, radiomics scores, and serum scores into a nomogram model to verify the effectiveness of the feature extraction. Fourteen radiomics features with non-zero coefficients and six serum markers with significant differences (P<0.01) were ultimately selected to predict CD activity. The AUC (micro/macro) of the ensemble machine learning model combining radiomics features and serum markers was 0.931/0.928 for the three-class task. The AUCs for the remission phase, the mild active phase, and the moderate-severe active phase were 0.983, 0.852, and 0.917, respectively. The mean AUC of the nomogram model was 0.940. A radiomics model integrating radiomics and serum markers of CD patients achieved enhanced consistency with SEC-CD in grading CD activity.
This model has the potential to assist clinicians in accurate diagnosis and treatment.
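The T-test pre-filter step used above (before LASSO) is a simple per-feature ranking by the two-sample t statistic. A minimal numpy sketch with synthetic data, illustrating only that filtering step, not the full LASSO/RF pipeline:

```python
import numpy as np

def t_filter(X, y, k):
    """Keep the k features with the largest two-sample |t| statistic.
    X: (n_samples, n_features) radiomics matrix; y: binary group labels."""
    a, b = X[y == 0], X[y == 1]
    se = np.sqrt(a.var(0, ddof=1) / len(a) + b.var(0, ddof=1) / len(b)) + 1e-12
    t = np.abs(a.mean(0) - b.mean(0)) / se
    return np.argsort(t)[::-1][:k]   # indices of the top-k features

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 5))         # 40 patients, 5 toy radiomics features
y = np.repeat([0, 1], 20)            # e.g. remission vs. active phase
X[y == 1, 0] += 5.0                  # only feature 0 separates the groups
print(t_filter(X, y, 2))             # feature 0 ranks first
```

In practice the surviving features are then passed to LASSO, which shrinks redundant coefficients to exactly zero (hence the "14 non-zero coefficient" features reported).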

U-Net-based architecture with attention mechanisms and Bayesian Optimization for brain tumor segmentation using MR images.

Ramalakshmi K, Krishna Kumari L

PubMed · Jun 30 2025
As technological innovation in computing has advanced, radiologists may now diagnose brain tumors (BT) with the aid of artificial intelligence (AI). In the medical field, early disease identification enables timely therapy, and AI systems are essential for saving time and money. The difficulties presented by various forms of Magnetic Resonance (MR) imaging for BT detection are frequently not addressed by conventional techniques. To overcome common problems with traditional tumor detection approaches, deep learning techniques have been developed. Thus, for BT segmentation using MR images, a U-Net-based architecture combined with attention mechanisms has been developed in this work. Moreover, Hyperparameter Optimization (HPO) is performed with the Bayesian Optimization Algorithm, fine-tuning essential variables to strengthen the segmentation model's performance. Tumor regions are pinpointed for segmentation using a Region-Adaptive Thresholding technique, and the segmentation results are validated against ground-truth annotated images to assess the performance of the suggested model. Experiments are conducted using the LGG, Healthcare, and BraTS 2021 MRI brain tumor datasets. Lastly, the value of the suggested model has been demonstrated by comparing several metrics, such as IoU, accuracy, and Dice score, with current state-of-the-art methods. The U-Net-based method achieved a higher Dice score of 0.89687 in the segmentation of MRI-BT.
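The Dice score reported above is the standard overlap metric for segmentation masks: twice the intersection over the sum of mask sizes. A minimal numpy sketch on toy masks (not the paper's evaluation code):

```python
import numpy as np

def dice_score(pred, truth):
    """Dice coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks agree perfectly
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

a = np.zeros((4, 4), dtype=int)
b = np.zeros((4, 4), dtype=int)
a[1:3, 1:3] = 1          # 4-pixel predicted "tumor"
b[1:3, 1:4] = 1          # 6-pixel ground truth
print(dice_score(a, b))  # 2*4 / (4+6) = 0.8
```

Dice weights the overlap relative to mask size, which is why it is preferred over plain pixel accuracy when tumors occupy a small fraction of the image.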

Machine learning methods for sex estimation of sub-adults using cranial computed tomography images.

Syed Mohd Hamdan SN, Faizal Abdullah ERM, Wen KJ, Al-Adawiyah Rahmat R, Wan Ibrahim WI, Abd Kadir KA, Ibrahim N

PubMed · Jun 30 2025
This research aimed to compare the classification accuracy of three machine learning (ML) methods, random forest (RF), support vector machine (SVM), and linear discriminant analysis (LDA), for sex estimation of sub-adults using cranial computed tomography (CCT) images. A total of 521 CCT scans from sub-adult Malaysians aged 0 to 20 were analysed using Mimics software (Materialise Mimics Ver. 21). A plane-to-plane (PTP) protocol was used to measure 14 chosen craniometric parameters. The three ML algorithms, RF, SVM, and LDA, each tuned with GridSearchCV, were used to produce classification models for sex estimation. Performance was measured in terms of accuracy, precision, recall, and F1-score, among other metrics. RF produced a testing accuracy of 73%, with best hyperparameters max_depth = 6, max_samples = 40, and n_estimators = 45. SVM obtained an accuracy of 67% with best hyperparameters regularization parameter (C) = 10, gamma = 0.01, and kernel = radial basis function (RBF). LDA obtained the lowest accuracy of 65% with a shrinkage of 0.02. Among the tested ML methods, RF showed the highest testing accuracy in comparison to SVM and LDA. This is the first AI-based classification model that can be used for estimating sex in sub-adults using CCT scans.
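The grid-search tuning used above exhaustively scores every hyperparameter combination and keeps the best. A stdlib-only sketch of that loop; `cv_score` is a toy stand-in for a cross-validated accuracy (scikit-learn's GridSearchCV does the same enumeration with real model fits and k-fold scoring):

```python
from itertools import product

# Hypothetical surrogate for cross-validated accuracy; a real run would
# fit the RF / SVM / LDA model for each combination and score held-out folds.
def cv_score(max_depth, n_estimators):
    return 0.5 + 0.01 * max_depth + 0.001 * n_estimators

grid = {"max_depth": [2, 4, 6], "n_estimators": [15, 30, 45]}
best = max(
    (dict(zip(grid, vals)) for vals in product(*grid.values())),
    key=lambda params: cv_score(**params),
)
print(best)  # {'max_depth': 6, 'n_estimators': 45}
```

The cost is the product of all grid sizes times the number of CV folds, which is why grids are kept small for larger models.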