Shih CS, Chiu HW

PubMed · Aug 7, 2025
This study aims to enhance the accuracy of fetal ultrasound image classification using convolutional neural networks, specifically EfficientNet. The research focuses on data collection, preprocessing, model training, and evaluation at different pregnancy stages: early, midterm, and newborn. EfficientNet showed the best performance, particularly in the newborn stage, demonstrating deep learning's potential to improve classification performance and support clinical workflows.
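As a rough illustration of the workflow described above, the sketch below fine-tunes a pretrained EfficientNet-B0 from torchvision for a hypothetical three-class stage classifier (early, midterm, newborn). The dataset layout, input size, and hyperparameters are placeholder assumptions rather than the study's actual configuration.

```python
# Minimal sketch, assuming a folder-per-class dataset of fetal ultrasound frames.
import torch
import torch.nn as nn
from torchvision import models, transforms
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

# ImageNet-pretrained backbone; swap the classifier head for 3 stage classes.
model = models.efficientnet_b0(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 3)
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_ds = ImageFolder("data/fetal_us/train", transform=preprocess)  # hypothetical path
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in train_dl:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```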

Luo X, Tahabi FM, Rollins DM, Sawchuk AP

PubMed · Aug 7, 2025
Routine duplex ultrasound surveillance is recommended after femoral-popliteal and femoral-tibial-pedal vein bypass grafts at various post-operative intervals. Currently, there is no systematic method for bypass graft surveillance using a set of peak systolic velocities (PSVs) collected during these exams. This research aims to explore the use of recurrent neural networks to predict the next set of PSVs, which can then indicate occlusion status. Recurrent neural network models were developed to predict occlusion and stenosis based on one to three prior sets of PSVs, with a sequence-to-sequence model utilized to forecast future PSVs within the stent graft and nearby arteries. The study employed 5-fold cross-validation for model performance comparison, revealing that the BiGRU model outperformed BiLSTM when two or more sets of PSVs were included and demonstrating that including additional duplex ultrasound exams improves prediction accuracy and reduces error rates. This work establishes a basis for integrating comprehensive clinical data, including demographics, comorbidities, symptoms, and other risk factors, with PSVs to enhance lower extremity bypass graft surveillance predictions.
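A minimal PyTorch sketch of the core idea follows: a bidirectional GRU consumes one to three prior PSV sets and forecasts the next set. The number of PSV measurement sites (six) and the layer sizes are assumptions, not the paper's configuration.

```python
# Minimal sketch, assuming six PSV measurement sites per surveillance exam.
import torch
import torch.nn as nn

class BiGRUPSVForecaster(nn.Module):
    def __init__(self, n_sites: int = 6, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(input_size=n_sites, hidden_size=hidden,
                          num_layers=2, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_sites)  # forecast the next PSV set

    def forward(self, psv_history: torch.Tensor) -> torch.Tensor:
        # psv_history: (batch, n_prior_exams, n_sites)
        out, _ = self.gru(psv_history)
        return self.head(out[:, -1, :])  # features from the most recent exam

model = BiGRUPSVForecaster()
history = torch.randn(8, 3, 6)   # 8 patients, 3 prior exams, 6 PSV sites (dummy data)
next_psvs = model(history)       # (8, 6) predicted velocities
```

The predicted PSVs could then be thresholded against velocity criteria to flag likely stenosis or occlusion, which is how a forecast of this kind would feed back into surveillance decisions.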

Fernando P, Lyell D, Wang Y, Magrabi F

PubMed · Aug 7, 2025
The U.S. Food and Drug Administration (FDA) plays an important role in ensuring the safety and effectiveness of AI/ML-enabled devices through its regulatory processes. In recent years, there has been an increase in the number of these devices cleared by the FDA. This study analyzes 104 FDA-approved ML-enabled medical devices from May 2021 to April 2023, extending previous research to provide a contemporary perspective on this evolving landscape. We examined clinical task, device task, device inputs and outputs, ML method, and level of autonomy. Most approvals (n = 103) were via the 510(k) premarket notification pathway, indicating substantial equivalence to existing devices. Devices predominantly supported diagnostic tasks (n = 81). The majority of devices used imaging data (n = 99), with CT and MRI being the most common modalities. Device autonomy levels were distributed as follows: 52% assistive (requiring users to confirm or approve AI-provided information or decisions), 27% autonomous information, and 21% autonomous decision. The prevalence of assistive devices indicates a cautious approach to integrating ML into clinical decision-making, favoring support rather than replacement of human judgment.

Ahn S, Park H, Yoo J, Choi J

PubMed · Aug 7, 2025
This study proposes RRG-LLM, a model designed to enhance radiology report generation (RRG) by effectively learning the medical domain with minimal computational resources. First, the LLM is fine-tuned with LoRA, enabling efficient adaptation to the medical domain. Subsequently, only the linear projection layer that projects the image into the text space is fine-tuned, so that it extracts the important information from the radiology image and maps it onto the text dimension. The proposed model demonstrated notable improvements in report generation: ROUGE-L improved by 0.096 (51.7%) and METEOR by 0.046 (42.85%) compared to the baseline model.
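The sketch below illustrates this two-stage recipe with Hugging Face PEFT: LoRA adapters for domain adaptation of a base LLM, then a single trainable linear projection mapping image features into the text embedding space. The base model (GPT-2), feature dimensions, and LoRA target modules are placeholders, not the paper's actual components.

```python
# Minimal sketch of a LoRA-then-projection setup; all concrete choices are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Stage 1: adapt the LLM to the medical domain with LoRA adapters.
llm = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base LLM
text_dim = llm.config.hidden_size                    # 768 for GPT-2
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["c_attn"], task_type="CAUSAL_LM")
llm = get_peft_model(llm, lora_cfg)
# ... LoRA fine-tuning on medical text would happen here ...

# Stage 2: freeze the adapted LLM and train only a linear projection that maps
# image features (e.g., 768-d patch embeddings from a vision encoder) into the
# LLM's text embedding space.
for p in llm.parameters():
    p.requires_grad = False
vision_dim = 768                                     # assumed vision feature size
image_proj = nn.Linear(vision_dim, text_dim)
optimizer = torch.optim.AdamW(image_proj.parameters(), lr=1e-4)

image_feats = torch.randn(4, 49, vision_dim)         # dummy patch features
visual_tokens = image_proj(image_feats)              # (4, 49, text_dim), used as prefix tokens
```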

Kim DY, Kim JW, Kim SK, Kim YG

PubMed · Aug 7, 2025
The diagnosis of craniosynostosis, a condition involving the premature fusion of cranial sutures in infants, is essential for ensuring timely treatment and optimal surgical outcomes. Current diagnostic approaches often require CT scans, which expose children to significant radiation risks. To address this, we present a novel deep learning-based model utilizing multi-view X-ray images for craniosynostosis detection. The proposed model integrates advanced multi-view fusion (MVF) and cross-attention mechanisms, effectively combining features from three X-ray views (AP, lateral right, lateral left) and patient metadata (age, sex). By leveraging these techniques, the model captures comprehensive semantic and structural information for high diagnostic accuracy while minimizing radiation exposure. Tested on a dataset of 882 X-ray images from 294 pediatric patients, the model achieved an AUROC of 0.975, an F1-score of 0.882, a sensitivity of 0.878, and a specificity of 0.937. Grad-CAM visualizations further validated its ability to localize disease-relevant regions using only classification annotations. The model demonstrates the potential to revolutionize pediatric care by providing a safer, cost-effective alternative to CT scans.
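A compact sketch of the multi-view idea follows: a shared CNN encodes the three radiograph views, attention lets the view tokens exchange information, and the fused representation is concatenated with an age/sex embedding for classification. The backbone, dimensions, and pooling are illustrative assumptions rather than the published architecture.

```python
# Minimal sketch of multi-view fusion over AP / lateral-right / lateral-left views
# plus patient metadata; all sizes and the fusion scheme are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class MultiViewCranioNet(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        backbone = models.resnet18(weights="IMAGENET1K_V1")
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # shared CNN -> (B, 512, 1, 1)
        self.attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)
        self.meta_mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU())      # age, sex
        self.head = nn.Linear(dim + 32, 1)                              # binary logit

    def forward(self, ap, lat_r, lat_l, meta):
        feats = [self.encoder(v).flatten(1) for v in (ap, lat_r, lat_l)]
        tokens = torch.stack(feats, dim=1)          # (B, 3 views, 512)
        fused, _ = self.attn(tokens, tokens, tokens)  # views attend to one another
        fused = fused.mean(dim=1)                   # pool the attended view tokens
        return self.head(torch.cat([fused, self.meta_mlp(meta)], dim=1))

model = MultiViewCranioNet()
x = torch.randn(2, 3, 224, 224)                     # dummy radiographs
logit = model(x, x, x, torch.tensor([[0.5, 1.0], [1.2, 0.0]]))  # dummy age/sex metadata
```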

Hwang S, Heo S, Hong S, Cha WC, Yoo J

PubMed · Aug 7, 2025
This study aimed to develop and evaluate an artificial intelligence model to predict 28-day mortality of pneumonia patients at the time of disposition from the emergency department (ED). A multicenter retrospective study was conducted on data from pneumonia patients who visited the ED of a tertiary academic hospital over an 8-month period and from the Medical Information Mart for Intensive Care (MIMIC-IV) database. We combined chest X-ray (CXR) information, clinical data, and the CURB-65 score to develop three models, with the CURB-65 score as a baseline. A total of 2,874 ED visits were analyzed. The RSF model using CXR, clinical data, and CURB-65 achieved a C-index of 0.872 on the test set, significantly outperforming the CURB-65 score. This study developed a prognostic prediction model for pneumonia patients, highlighting the potential of multi-modal clinical information to support clinical decision making in the ED.
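Assuming RSF denotes a random survival forest, the sketch below shows how such a model can be fit on combined CXR-derived, clinical, and CURB-65 features with scikit-survival and evaluated with the concordance index; all data here are synthetic placeholders.

```python
# Minimal sketch of a random survival forest for 28-day mortality on synthetic data.
import numpy as np
from sksurv.ensemble import RandomSurvivalForest
from sksurv.util import Surv

rng = np.random.default_rng(0)
n = 500
X = rng.normal(size=(n, 10))                     # e.g., CXR score, vitals, labs, CURB-65
event = rng.integers(0, 2, size=n).astype(bool)  # death observed within follow-up
time = rng.uniform(1, 28, size=n)                # days to event or censoring
y = Surv.from_arrays(event=event, time=time)     # structured survival labels

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X[:400], y[:400])
c_index = rsf.score(X[400:], y[400:])            # Harrell's concordance index
print(f"C-index: {c_index:.3f}")
```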

Hao Z, Chapman BE

PubMed · Aug 7, 2025
Renal tumors require early diagnosis and precise localization for effective treatment. This study aims to automate renal tumor analysis in abdominal CT images using a cascade 3D U-Net architecture for semantic kidney segmentation. To address challenges like edge detection and small object segmentation, the framework incorporates residual blocks to enhance convergence and efficiency. Comprehensive training configurations, preprocessing, and postprocessing strategies were employed to ensure accurate results. Tested on KiTS2019 data, the method ranked 23rd on the leaderboard (Nov 2024), demonstrating the enhanced cascade 3D U-Net's effectiveness in improving segmentation precision.
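A small PyTorch sketch of the kind of 3D residual block used to improve convergence in such a cascade U-Net is given below; channel counts and normalization are illustrative choices, not the exact blocks from the paper. In a cascade setup, blocks like this would appear at each resolution level of both the coarse and the fine network.

```python
# Minimal sketch of a 3D residual convolution block; design details are assumptions.
import torch
import torch.nn as nn

class ResidualBlock3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.InstanceNorm3d(out_ch), nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.InstanceNorm3d(out_ch),
        )
        # 1x1x1 projection when channel counts differ, so the skip can be added
        self.skip = (nn.Identity() if in_ch == out_ch
                     else nn.Conv3d(in_ch, out_ch, kernel_size=1, bias=False))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.conv(x) + self.skip(x))

block = ResidualBlock3D(1, 32)
ct_patch = torch.randn(1, 1, 64, 128, 128)  # (batch, channel, D, H, W) CT sub-volume
features = block(ct_patch)                  # (1, 32, 64, 128, 128)
```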

Chen L, Yang L, Bedir O

PubMed · Aug 7, 2025
Medical diagnostics often rely on the interpretation of complex medical images. However, manual analysis and report generation by medical practitioners are time-consuming, and the inherent ambiguity in chest X-rays presents significant challenges for automated systems in producing interpretable results. To address this, we propose the Attention-Infused Mask Recurrent Neural Network (AIMR-MediTell), a deep learning framework that integrates instance segmentation using Mask R-CNN with attention-based feature extraction to identify and highlight abnormal regions in chest X-rays. The framework also incorporates an encoder-decoder structure with pretrained BioWordVec embeddings to generate explanatory reports based on the augmented images. We evaluated AIMR-MediTell on the Open-I dataset, achieving a BLEU-4 score of 0.415 and outperforming existing models. Our results demonstrate the effectiveness of the proposed model, showing that incorporating masked regions enhances report accuracy and interpretability. By identifying abnormal regions and automating report generation for X-ray images, our approach has the potential to significantly improve the efficiency and accuracy of medical image analysis.
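For reference, the corpus-level BLEU-4 metric reported above can be computed with NLTK as in the sketch below; the reference and generated reports are placeholder token lists, not data from the study.

```python
# Minimal sketch: corpus BLEU-4 for generated radiology reports vs. references.
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

references = [
    [["no", "acute", "cardiopulmonary", "abnormality"]],      # one reference list per image
    [["stable", "mild", "cardiomegaly", "without", "edema"]],
]
hypotheses = [
    ["no", "acute", "cardiopulmonary", "abnormality"],         # generated reports, tokenized
    ["mild", "cardiomegaly", "without", "edema"],
]

bleu4 = corpus_bleu(references, hypotheses,
                    weights=(0.25, 0.25, 0.25, 0.25),          # equal 1- to 4-gram weights
                    smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {bleu4:.3f}")
```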

Alshagathrh FM, Schneider J, Househ MS

PubMed · Aug 7, 2025
This study presents an AI framework for real-time NAFLD detection using ultrasound imaging, addressing operator dependency, imaging variability, and class imbalance. It integrates CNNs with machine learning classifiers and applies preprocessing techniques, including normalization and GAN-based augmentation, to enhance prediction for underrepresented disease stages. Grad-CAM provides visual explanations to support clinical interpretation. Trained on 10,352 annotated images from multiple Saudi centers, the framework achieved 98.9% accuracy and an AUC of 0.99, outperforming baseline CNNs by 12.4% and improving sensitivity for advanced fibrosis and subtle features. Future work will extend multi-class classification, validate performance across settings, and integrate with clinical systems.
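The sketch below illustrates the general "CNN features plus classical ML classifier" pattern mentioned above: a frozen pretrained CNN embeds ultrasound frames and a gradient-boosting classifier predicts the disease stage. The backbone, classifier, and data are stand-in assumptions, not the study's actual pipeline.

```python
# Minimal sketch of a CNN-feature + ML-classifier pipeline on dummy data.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import GradientBoostingClassifier

# Frozen ResNet-18 backbone as a 512-d feature extractor.
backbone = models.resnet18(weights="IMAGENET1K_V1")
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

@torch.no_grad()
def embed(images: torch.Tensor) -> np.ndarray:
    return extractor(images).flatten(1).numpy()

# Dummy stand-ins for preprocessed ultrasound frames and stage labels.
train_imgs, train_labels = torch.randn(64, 3, 224, 224), np.random.randint(0, 3, 64)
test_imgs = torch.randn(8, 3, 224, 224)

clf = GradientBoostingClassifier()
clf.fit(embed(train_imgs), train_labels)
stage_pred = clf.predict(embed(test_imgs))
```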

Li P, Jin Y, Wang M, Liu F

PubMed · Aug 7, 2025
Early classification of brain tumors is key to effective treatment. With advances in medical imaging technology, automated classification algorithms face challenges due to tumor diversity. Although the Swin Transformer is effective at handling high-resolution images, it encounters difficulties with small datasets and high computational complexity. This study introduces SparseSwinMDT, a novel model that combines sparse token representation with multipath decision trees. Experimental results show that SparseSwinMDT achieves an accuracy of 99.47% in brain tumor classification, significantly outperforming existing methods while reducing computation time, making it particularly suitable for resource-constrained medical environments.
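As a loose stand-in for this design, the sketch below pairs a torchvision Swin-T backbone (used purely as a feature extractor) with a decision-tree ensemble for the final classification; the sparse token mechanism and multipath decision trees of the actual model are not reproduced here.

```python
# Minimal sketch: Swin features feeding a tree ensemble; all data and sizes are dummies.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.ensemble import RandomForestClassifier

swin = models.swin_t(weights="IMAGENET1K_V1")
swin.head = nn.Identity()          # expose 768-d features instead of ImageNet logits
swin.eval()

@torch.no_grad()
def features(mri_batch: torch.Tensor) -> np.ndarray:
    return swin(mri_batch).numpy()

# Dummy stand-ins for preprocessed MRI slices and tumor-type labels.
X_train, y_train = torch.randn(32, 3, 224, 224), np.random.randint(0, 4, 32)
X_test = torch.randn(4, 3, 224, 224)

trees = RandomForestClassifier(n_estimators=200, random_state=0)
trees.fit(features(X_train), y_train)
tumor_pred = trees.predict(features(X_test))
```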