
Preoperative MRI-based deep learning reconstruction and classification model for assessing rectal cancer.

Yuan Y, Ren S, Lu H, Chen F, Xiang L, Chamberlain R, Shao C, Lu J, Shen F, Chen L

PubMed · Jul 1 2025
To determine whether deep learning reconstruction (DLR) could improve the image quality of rectal MR images, and to explore how well different readers and deep learning classification models discriminate the TN stage of rectal cancer, compared with conventional MR images without DLR. High-resolution T2-weighted, diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted imaging (CE-T1WI) images from patients with pathologically diagnosed rectal cancer were retrospectively processed with and without DLR and assessed by five readers. The first two readers measured the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of the lesions. The overall image quality and lesion display performance for each sequence with and without DLR were independently scored on a five-point scale, and the TN stage of rectal cancer lesions was evaluated by the other three readers. Fifty patients were randomly selected for a further comparison between DLR and a traditional denoising filter. Deep learning classification models were developed and compared for the TN stage. Receiver operating characteristic (ROC) curve analysis and decision curve analysis (DCA) were used to evaluate the diagnostic performance of the proposed model. Overall, 178 patients were evaluated. The SNR and CNR of the lesion on images with DLR were significantly higher than those without DLR for T2WI, DWI, and CE-T1WI (p < 0.0001). A significant difference was observed in overall image quality and lesion display performance between images with and without DLR (p < 0.0001). For all three sequences, the image quality scores, SNR, and CNR values of the DLR image set were significantly higher than those of the original and filter-enhanced image sets (all p < 0.05). The deep learning classification models with DLR achieved good discrimination of the TN stage, with area under the curve (AUC) values of 0.937 (95% CI 0.839-0.977) and 0.824 (95% CI 0.684-0.913) in the test sets, respectively. Deep learning reconstruction and classification models can improve the image quality of rectal MR images and enhance diagnostic performance for determining the TN stage of patients with rectal cancer.
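
As a rough illustration of the SNR and CNR measurements described above, the sketch below computes both metrics from hypothetical ROI masks; the study's exact ROI protocol is not specified in the abstract:

```python
import numpy as np

def snr_cnr(image, lesion_mask, background_mask, noise_mask):
    # SNR: mean lesion signal over the noise-region standard deviation.
    # CNR: absolute lesion-background mean difference over the same SD.
    # All masks are hypothetical boolean arrays over the image.
    noise_sd = image[noise_mask].std()
    snr = image[lesion_mask].mean() / noise_sd
    cnr = abs(image[lesion_mask].mean()
              - image[background_mask].mean()) / noise_sd
    return snr, cnr
```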

Leveraging multithreading on edge computing for smart healthcare based on intelligent multimodal classification approach.

Alghareb FS, Hasan BT

PubMed · Jul 1 2025
Medical digitization has developed rapidly in the last decade, paving the way for computer-aided medical diagnosis research. Anomaly detection based on machine and deep learning techniques has accordingly been employed extensively in healthcare applications, such as medical imaging classification and monitoring of patients' vital signs. To effectively leverage digitized medical records for identifying challenges in healthcare, this manuscript presents a smart Clinical Decision Support System (CDSS) dedicated to automated diagnosis of medical multimodal data. A smart healthcare system for medical data management and decision-making is proposed. To deliver timely, rapid diagnosis, thread-level parallelism (TLP) is utilized to distribute classification tasks across three edge computing devices, each employing an AI module for on-device classification. In contrast to existing machine and deep learning classification techniques, the proposed multithreaded architecture realizes a hybrid (ML and DL) processing module on each edge node. The presented edge computing-based parallel architecture thus captures a high level of parallelism, tailored for handling multiple categories of medical records. The cluster comprises three Raspberry Pi edge devices and an edge server. Furthermore, lightweight neural networks, such as MobileNet, EfficientNet, and ResNet18, are trained and optimized with genetic algorithms to classify brain tumors, pneumonia, and colon cancer. Models were deployed in Python, with PyCharm running on the edge server and Thonny installed on the edge nodes. In terms of accuracy, the proposed GA-optimized ResNet18 for pneumonia diagnosis achieves 93.59% predictive accuracy while reducing classifier computation complexity by 33.59%, whereas outstanding accuracies of 99.78% and 100% were achieved with EfficientNet-v2 for brain tumor and colon cancer prediction, respectively, with both models preserving a 25% reduction in the classifier. More importantly, inference speedups of 28.61% and 29.08% were obtained with parallel two-thread and three-thread DL configurations, respectively, compared to the sequential implementation. The proposed multimodal, multithreaded architecture thus offers promising prospects for comprehensive and accurate anomaly detection across patients' medical imaging and vital signs, contributing to the advancement of healthcare services and to improved patient diagnosis and therapy outcomes.
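
To illustrate the thread-level parallelism idea, the sketch below dispatches three classification tasks concurrently; the classifier functions and their names are placeholders, not the paper's code:

```python
from concurrent.futures import ThreadPoolExecutor

# Placeholder per-modality classifiers; in the paper each edge node runs
# an optimized model (e.g., GA-tuned ResNet18 or EfficientNet-v2).
def classify_brain_mri(image):
    return "no_tumor"        # hypothetical result

def classify_chest_xray(image):
    return "pneumonia"       # hypothetical result

def classify_colon_histology(image):
    return "benign"          # hypothetical result

def parallel_diagnose(brain_img, chest_img, colon_img):
    # Thread-level parallelism: the three classification tasks run
    # concurrently, mirroring the 3-thread DL configuration above.
    with ThreadPoolExecutor(max_workers=3) as pool:
        futures = [
            pool.submit(classify_brain_mri, brain_img),
            pool.submit(classify_chest_xray, chest_img),
            pool.submit(classify_colon_histology, colon_img),
        ]
        return [f.result() for f in futures]
```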

Deep learning based classification of tibio-femoral knee osteoarthritis from lateral view knee joint X-ray images.

Abdullah SS, Rajasekaran MP, Hossen MJ, Wong WK, Ng PK

PubMed · Jul 1 2025
To design and evaluate an effective deep learning-driven method that locates and classifies tibio-femoral knee joint osteoarthritis, including the joint space width (JSW), in both anterior-posterior (AP) and lateral view knee joint X-ray images, and to compare performance between the two views. We use 4334 knee X-ray images for this study. This paper introduces a methodology to locate, classify, and compare the outcomes of tibio-femoral knee joint osteoarthritis from both AP and lateral knee joint X-ray images. We fine-tuned DenseNet-201 with transfer learning to extract features for detecting and classifying tibio-femoral knee joint osteoarthritis from both AP and lateral view knee joint X-ray images, and compared the proposed model with several classifiers. The proposed model localizes the tibio-femoral knee JSW with an accuracy of 98.12% (lateral view) and 99.32% (AP view). The classification accuracy is 92.42% for the lateral view and 98.57% for the AP view, demonstrating automatic detection and classification of tibio-femoral knee joint osteoarthritis in both views. To our knowledge, this is the first automated deep learning approach to classify tibio-femoral osteoarthritis on both the AP and lateral views. The model was trained on the femur and tibial bone regions from both AP and lateral view digital X-ray images and locates and classifies tibio-femoral knee joint osteoarthritis better than existing approaches. The approach will help clinicians and medical experts analyze the progression of tibio-femoral knee OA in different views. It performs better in the AP view than in the lateral view; compared with existing architectures and models, the fine-tuned model offers exceptional outcomes.
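
A minimal sketch of DenseNet-201 transfer learning along these lines is shown below; the number of classes and the choice of frozen layers are assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn as nn
from torchvision import models

num_classes = 2  # hypothetical: OA vs. non-OA

# DenseNet-201 pretrained on ImageNet, with the classifier head
# replaced for knee-OA classification.
model = models.densenet201(weights=models.DenseNet201_Weights.IMAGENET1K_V1)
for param in model.features.parameters():
    param.requires_grad = False          # freeze the convolutional backbone
model.classifier = nn.Linear(model.classifier.in_features, num_classes)

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
```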

MedGround-R1: Advancing Medical Image Grounding via Spatial-Semantic Rewarded Group Relative Policy Optimization

Huihui Xu, Yuanpeng Nie, Hualiang Wang, Ying Chen, Wei Li, Junzhi Ning, Lihao Liu, Hongqiu Wang, Lei Zhu, Jiyao Liu, Xiaomeng Li, Junjun He

arXiv preprint · Jul 1 2025
Medical Image Grounding (MIG), which involves localizing specific regions in medical images based on textual descriptions, requires models not only to perceive regions but also to deduce the spatial relationships among them. Existing Vision-Language Models (VLMs) for MIG often rely on Supervised Fine-Tuning (SFT) with large amounts of Chain-of-Thought (CoT) reasoning annotations, which are expensive and time-consuming to acquire. Recently, DeepSeek-R1 demonstrated that Large Language Models (LLMs) can acquire reasoning abilities through Group Relative Policy Optimization (GRPO) without requiring CoT annotations. In this paper, we adapt the GRPO reinforcement learning framework to VLMs for Medical Image Grounding. We propose Spatial-Semantic Rewarded Group Relative Policy Optimization to train the model without CoT reasoning annotations. Specifically, we introduce Spatial-Semantic Rewards, which combine a spatial accuracy reward and a semantic consistency reward to provide nuanced feedback for both spatially positive and negative completions. Additionally, we propose a Chain-of-Box template, which integrates the visual information of referring bounding boxes into the <think> reasoning process, enabling the model to reason explicitly about spatial regions during intermediate steps. Experiments on three datasets (MS-CXR, ChestX-ray8, and M3D-RefSeg) demonstrate that our method achieves state-of-the-art performance in Medical Image Grounding. Ablation studies further validate the effectiveness of each component. Code, checkpoints, and datasets are available at https://github.com/bio-mlhui/MedGround-R1
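
As a hedged sketch of the reward design, the snippet below combines an IoU-based spatial term with a semantic consistency score and normalizes rewards group-relatively as in GRPO; the weighting `alpha` is a hypothetical choice, not taken from the paper:

```python
import statistics

def iou(box_a, box_b):
    # Boxes as (x1, y1, x2, y2); standard intersection-over-union.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def spatial_semantic_reward(pred_box, gt_box, semantic_score, alpha=0.5):
    # Spatial accuracy (IoU) blended with a semantic consistency score
    # in [0, 1]; alpha is a hypothetical weight, not the paper's value.
    return alpha * iou(pred_box, gt_box) + (1 - alpha) * semantic_score

def group_relative_advantages(rewards):
    # GRPO's core normalization: each completion's reward is scored
    # relative to the group of completions sampled for the same prompt.
    mu = statistics.mean(rewards)
    sd = statistics.pstdev(rewards) + 1e-9
    return [(r - mu) / sd for r in rewards]
```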

Iterative Misclassification Error Training (IMET): An Optimized Neural Network Training Technique for Image Classification

Ruhaan Singh, Sreelekha Guggilam

arXiv preprint · Jul 1 2025
Deep learning models have proven effective on medical datasets for accurate diagnostic predictions from images. However, medical datasets often contain noisy, mislabeled, or poorly generalizable images, particularly for edge cases and anomalous outcomes. Additionally, high-quality datasets are often small in sample size, which can result in overfitting, where models memorize noise rather than learn generalizable patterns. This poses serious risks in medical diagnostics, where misclassification can impact human life. Several data-efficient training strategies have emerged to address these constraints. In particular, coreset selection identifies compact subsets of the most representative samples, enabling training that approximates full-dataset performance while reducing computational overhead. Curriculum learning, on the other hand, gradually increases training difficulty to accelerate convergence. However, developing a generalizable difficulty-ranking mechanism that works across diverse domains, datasets, and models while keeping computational costs low remains challenging. In this paper, we introduce Iterative Misclassification Error Training (IMET), a novel framework inspired by curriculum learning and coreset selection. IMET identifies misclassified samples in order to streamline the training process while prioritizing the model's attention on edge-case scenarios and rare outcomes. The paper evaluates IMET's performance on benchmark medical image classification datasets against state-of-the-art ResNet architectures, and the results demonstrate IMET's potential for enhancing model robustness and accuracy in medical image analysis.
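
A schematic of the iterative-misclassification idea might look like the following; the reweighting rule, boost factor, and round count are assumptions, since the paper's exact selection mechanism is not given in the abstract:

```python
import numpy as np

def imet_train(model, X, y, rounds=5, boost=2.0):
    # After each round, samples the model still misclassifies receive
    # extra weight so later rounds focus on edge cases and rare
    # outcomes. Works with any scikit-learn-style estimator that
    # accepts sample_weight.
    weights = np.ones(len(y), dtype=float)
    for _ in range(rounds):
        model.fit(X, y, sample_weight=weights)
        wrong = model.predict(X) != y
        weights[wrong] *= boost          # prioritize misclassified samples
        weights /= weights.mean()        # keep the overall scale stable
    return model
```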

Deep Learning and Radiomics Discrimination of Coronary Chronic Total Occlusion and Subtotal Occlusion using CTA.

Zhou Z, Bo K, Gao Y, Zhang W, Zhang H, Chen Y, Chen Y, Wang H, Zhang N, Huang Y, Mao X, Gao Z, Zhang H, Xu L

PubMed · Jul 1 2025
Coronary chronic total occlusion (CTO) and subtotal occlusion (STO) pose diagnostic challenges and differ in treatment strategy. Artificial intelligence and radiomics are promising tools for accurate discrimination. This study aimed to develop deep learning (DL) and radiomics models using coronary computed tomography angiography (CCTA) to differentiate CTO from STO lesions and compare their performance with that of the conventional method. Patients with CTO and STO were identified retrospectively from a tertiary hospital and served as the training and validation sets for developing and validating the DL and radiomics models. An external test cohort was recruited from two additional tertiary hospitals with identical eligibility criteria. All participants underwent CCTA within 1 month before invasive coronary angiography. A total of 581 participants (mean age, 50 years ± 11 [SD]; 474 [81.6%] men) with 600 lesions were enrolled, including 403 CTO and 197 STO lesions. The DL and radiomics models exhibited better discrimination performance than the conventional method, with areas under the curve of 0.908 and 0.860, respectively, vs. 0.794 in the validation set (all p < 0.05), and 0.893 and 0.827, respectively, vs. 0.746 in the external test set (all p < 0.05). The proposed CCTA-based DL and radiomics models achieved efficient and accurate discrimination of coronary CTO and STO.
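
For readers unfamiliar with the radiomics side of such pipelines, a minimal feature-extraction sketch using pyradiomics is shown below; file names and the extractor setting are hypothetical, not the study's pipeline:

```python
from radiomics import featureextractor  # pip install pyradiomics

# Extract shape/texture features from a CCTA volume and a binary
# occlusion-segment mask (NIfTI); file names and the binWidth setting
# are hypothetical.
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=25)
features = extractor.execute("ccta_volume.nii.gz", "occlusion_mask.nii.gz")
radiomic_vector = {k: v for k, v in features.items()
                   if k.startswith("original_")}
```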

Breast cancer detection based on histological images using fusion of diffusion model outputs.

Akbari Y, Abdullakutty F, Al Maadeed S, Bouridane A, Hamoudi R

PubMed · Jul 1 2025
The precise detection of breast cancer in histopathological images remains a critical challenge in computational pathology, where accurate tissue segmentation significantly enhances diagnostic accuracy. This study introduces a novel approach leveraging a Conditional Denoising Diffusion Probabilistic Model (DDPM) to improve breast cancer detection through advanced segmentation and feature fusion. The method employs a conditional channel within the DDPM framework, first trained on a breast cancer histopathology dataset and extended to additional datasets, to achieve region-level segmentation of tumor areas and other tissue regions. These segmented regions, combined with the predicted noise from the diffusion model and the original images, are processed through an EfficientNet-B0 network to extract enhanced features. A transformer decoder then fuses these features to generate the final detection results. Extensive experiments optimizing the network architecture and fusion strategies were conducted, and the proposed method was evaluated across four distinct datasets, achieving a peak accuracy of 92.86% on the BRACS dataset, 100% on the BreCaHAD dataset, and 96.66% on the ICIAR2018 dataset. This approach represents a significant advancement in computational pathology, offering a robust tool for breast cancer detection with potential applications in broader medical imaging contexts.
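
One plausible reading of the fusion stage, sketched below in PyTorch, stacks the image, segmentation map, and predicted noise as a 9-channel input to EfficientNet-B0 and fuses the resulting tokens with a transformer decoder; all layer sizes and the channel layout are assumptions:

```python
import torch
import torch.nn as nn
from torchvision import models

class FusionClassifier(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        backbone = models.efficientnet_b0(weights=None)
        # Widen the stem to accept 9 channels: image + segmentation map
        # + predicted noise, each assumed to have 3 channels.
        backbone.features[0][0] = nn.Conv2d(9, 32, 3, stride=2,
                                            padding=1, bias=False)
        self.features = backbone.features
        layer = nn.TransformerDecoderLayer(d_model=1280, nhead=8,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.query = nn.Parameter(torch.zeros(1, 1, 1280))
        self.head = nn.Linear(1280, num_classes)

    def forward(self, image, seg_map, pred_noise):
        x = torch.cat([image, seg_map, pred_noise], dim=1)    # (B, 9, H, W)
        tokens = self.features(x).flatten(2).transpose(1, 2)  # (B, HW, 1280)
        fused = self.decoder(self.query.expand(len(x), -1, -1), tokens)
        return self.head(fused.squeeze(1))
```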

Development and validation of CT-based fusion model for preoperative prediction of invasion and lymph node metastasis in adenocarcinoma of esophagogastric junction.

Cao M, Xu R, You Y, Huang C, Tong Y, Zhang R, Zhang Y, Yu P, Wang Y, Chen W, Cheng X, Zhang L

PubMed · Jul 1 2025
In the context of precision medicine, radiomics has become a key technology for solving medical problems. For adenocarcinoma of the esophagogastric junction (AEG), developing a preoperative CT-based model to predict AEG invasion and lymph node metastasis is crucial. We retrospectively collected 256 patients with AEG from two centres. Radiomics features were extracted from preoperative diagnostic CT images, and feature selection and machine learning methods were applied to reduce the feature dimensionality and establish predictive imaging features. Three machine learning methods were compared to select the best radiomics nomogram, with the average AUC obtained from 20 repeats of fivefold cross-validation. The fusion model was constructed by logistic regression combining the radiomics nomogram with clinical factors. On this basis, the ROC curve, calibration curve, and decision curve of the fusion model were constructed. The predictive efficacy of the fusion model for tumour invasion depth was higher than that of the radiomics nomogram, with AUCs of 0.764 vs. 0.706 in the test set, 0.752 vs. 0.697 in the internal validation set, and 0.756 vs. 0.687 in the external validation set (all p < 0.001). The predictive efficacy of the lymph node metastasis fusion model was likewise higher than that of the radiomics nomogram, with AUCs of 0.809 vs. 0.732 in the test set, 0.841 vs. 0.718 in the internal validation set, and 0.801 vs. 0.680 in the external validation set (all p < 0.001). We have developed a fusion model combining radiomics and clinical risk factors that supports accurate preoperative diagnosis and treatment of AEG, advancing precision medicine. It may also spark discussion of the imaging feature differences between AEG and gastric cancer (GC).
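
The evaluation protocol (a logistic-regression fusion scored by 20 repeats of fivefold cross-validation) can be sketched with scikit-learn as follows; the features and data here are synthetic placeholders, not the study's variables:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

# Synthetic placeholders standing in for the radiomics score and a
# clinical factor; the real study uses its own features and labels.
rng = np.random.default_rng(0)
radiomics_score = rng.normal(size=200)
age = rng.normal(60, 10, size=200)
y = (radiomics_score + rng.normal(size=200) > 0).astype(int)
X = np.column_stack([radiomics_score, age])

cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=0)
aucs = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                       cv=cv, scoring="roc_auc")
print(f"mean AUC over 20 x 5-fold CV: {aucs.mean():.3f}")
```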

Generative AI for weakly supervised segmentation and downstream classification of brain tumors on MR images.

Yoo JJ, Namdar K, Wagner MW, Yeom KW, Nobre LF, Tabori U, Hawkins C, Ertl-Wagner BB, Khalvati F

PubMed · Jul 1 2025
Segmenting abnormalities is a leading problem in medical imaging. Using machine learning for segmentation generally requires manually annotated segmentations, demanding extensive time and resources from radiologists. We propose a weakly supervised approach that utilizes binary image-level labels, which are much simpler to acquire than manual annotations, to segment brain tumors on magnetic resonance images. The proposed method generates healthy variants of cancerous images for use as priors when training the segmentation model. However, using weakly supervised segmentations for downstream tasks such as classification can be challenging due to occasional unreliable segmentations. To address this, we propose using the generated non-cancerous variants to identify the most effective segmentations without requiring ground truths. Our proposed method generates segmentations that achieve Dice coefficients of 79.27% on the Multimodal Brain Tumor Segmentation (BraTS) 2020 dataset and 73.58% on an internal dataset of pediatric low-grade glioma (pLGG), which increase to 88.69% and 80.29%, respectively, when suboptimal segmentations identified by the proposed method are removed. Using the segmentations for tumor classification yields areas under the receiver operating characteristic curve (AUC) of 93.54% and 83.74% on the BraTS and pLGG datasets, respectively, comparable to using manual annotations, which achieve AUCs of 95.80% and 83.03%.
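
Two small helpers convey the quantitative machinery: the Dice coefficient used for evaluation, and one plausible reading of how generated healthy variants could flag suboptimal segmentations (the difference heuristic and threshold are assumptions, not the paper's rule):

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Standard Dice coefficient between two binary masks.
    pred, target = pred.astype(bool), target.astype(bool)
    return 2 * (pred & target).sum() / (pred.sum() + target.sum() + eps)

def keep_segmentation(seg, image, healthy_variant, thresh=0.5):
    # Hypothetical filter: the difference between a cancerous image and
    # its generated healthy variant should highlight the tumor, so a
    # segmentation that overlaps that difference map poorly is dropped.
    diff = np.abs(image - healthy_variant)
    pseudo_mask = diff > diff.mean() + 2 * diff.std()
    return dice(seg, pseudo_mask) >= thresh
```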

Enhanced pulmonary nodule detection with U-Net, YOLOv8, and swin transformer.

Wang X, Wu H, Wang L, Chen J, Li Y, He X, Chen T, Wang M, Guo L

PubMed · Jul 1 2025
Lung cancer remains the leading cause of cancer-related mortality worldwide, emphasizing the critical need for early pulmonary nodule detection to improve patient outcomes. Current methods struggle to detect small nodules and exhibit high false positive rates, placing an additional diagnostic burden on radiologists. This study aimed to develop a two-stage deep learning model integrating U-Net, YOLOv8s, and the Swin transformer to enhance pulmonary nodule detection in computed tomography (CT) images, particularly for small nodules, with the goal of improving detection accuracy and reducing false positives. We utilized the LUNA16 dataset (888 CT scans) and an additional 308 CT scans from Tianjin Chest Hospital, with images preprocessed for consistency. The proposed model first employs U-Net for precise lung segmentation, followed by YOLOv8s augmented with the Swin transformer for nodule detection. The Shape-aware IoU (SIoU) loss function was implemented to improve bounding-box predictions. On the LUNA16 dataset, the model achieved a precision of 0.898, a recall of 0.851, and a mean average precision at 50% IoU (mAP50) of 0.879, outperforming state-of-the-art models. On the Tianjin Chest Hospital dataset, it achieved a precision of 0.855, a recall of 0.872, and an mAP50 of 0.862. This study presents a two-stage deep learning model that leverages U-Net, YOLOv8s, and the Swin transformer for enhanced pulmonary nodule detection in CT images. The model demonstrates high accuracy and a reduced false positive rate, suggesting its potential as a useful tool for early lung cancer diagnosis and treatment.
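
A high-level sketch of the two-stage pipeline is given below; `unet` stands in for any trained lung-segmentation model, and the stock ultralytics YOLOv8s is used purely as a placeholder for the paper's Swin-augmented detector:

```python
import numpy as np
from ultralytics import YOLO  # stock detector as a stand-in

detector = YOLO("yolov8s.pt")  # placeholder weights, not the paper's model

def detect_nodules(ct_slice, unet):
    # Stage 1: U-Net isolates the lung fields (unet is assumed to return
    # a probability map for a 2D slice; 0.5 is an assumed threshold).
    lung_mask = unet(ct_slice) > 0.5
    # Suppress non-lung anatomy before detection.
    masked = np.where(lung_mask, ct_slice, 0)
    # Stage 2: run the detector on the masked slice, replicated to RGB.
    rgb = np.stack([masked] * 3, axis=-1).astype(np.uint8)
    return detector(rgb)
```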