Fetal-Net: enhancing Maternal-Fetal ultrasound interpretation through Multi-Scale convolutional neural networks and Transformers.

Islam U, Ali YA, Al-Razgan M, Ullah H, Almaiah MA, Tariq Z, Wazir KM

PubMed | Jul 15 2025
Ultrasound imaging plays an important role in evaluating fetal growth and maternal-fetal health, but its interpretation is challenging due to the complicated anatomy of the fetus and fluctuations in image quality. Although deep learning approaches, including Convolutional Neural Networks (CNNs), have shown promise, they have largely been limited to single tasks, such as segmentation or detection of fetal structures, and thus lack an integrated solution that accounts for the intricate interplay between anatomical structures. To overcome these limitations, Fetal-Net, a new deep learning architecture that integrates multi-scale CNNs and transformer layers, was developed. The model was trained on a large, expertly annotated set of more than 12,000 ultrasound images across different anatomical planes for effective identification of fetal structures and anomaly detection. Fetal-Net achieved excellent performance in anomaly detection, with precision of 96.5%, accuracy of 97.5%, and recall of 97.8%, and showed robustness across various imaging settings, making it a potent means of augmenting prenatal care through refined ultrasound image interpretation.
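As a rough illustration of the hybrid design this abstract describes, the sketch below pairs a multi-scale convolutional front end with a small transformer encoder. All layer sizes, the four-class head, and the module names are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): parallel convolutions at
# several kernel sizes capture multi-scale context; their concatenated
# features are pooled into tokens for a small transformer encoder.
import torch
import torch.nn as nn

class MultiScaleCNNTransformer(nn.Module):
    def __init__(self, num_classes=4, dim=128):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv2d(1, dim // 4, kernel_size=k, padding=k // 2)
            for k in (3, 5, 7, 9)            # four receptive-field scales
        ])
        self.pool = nn.AdaptiveAvgPool2d((16, 16))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):                    # x: (B, 1, H, W) ultrasound image
        feats = torch.cat([b(x) for b in self.branches], dim=1)   # (B, dim, H, W)
        tokens = self.pool(feats).flatten(2).transpose(1, 2)      # (B, 256, dim)
        tokens = self.encoder(tokens)
        return self.head(tokens.mean(dim=1))  # average over tokens, then classify

logits = MultiScaleCNNTransformer()(torch.randn(2, 1, 256, 256))
```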

Motion artifacts and image quality in stroke MRI: associated factors and impact on AI and human diagnostic accuracy.

Krag CH, Müller FC, Gandrup KL, Andersen MB, Møller JM, Liu ML, Rud A, Krabbe S, Al-Farra L, Nielsen M, Kruuse C, Boesen MP

PubMed | Jul 15 2025
To assess the prevalence of motion artifacts and the factors associated with them in a cohort of suspected stroke patients, and to determine their impact on diagnostic accuracy for both AI and radiologists. This retrospective cross-sectional study included brain MRI scans of consecutive adult suspected stroke patients from a non-comprehensive Danish stroke center between January and April 2020. An expert neuroradiologist identified acute ischemic, hemorrhagic, and space-occupying lesions as references. Two blinded radiology residents rated MRI image quality and motion artifacts. The diagnostic accuracy of a CE-marked deep learning tool was compared to that of radiology reports. Multivariate analysis examined associations between patient characteristics and motion artifacts. 775 patients (mean age, 68 ± 16 years; 420 female) were included. Acute ischemic, hemorrhagic, and space-occupying lesions were found in 216 (27.9%), 12 (1.5%), and 20 (2.6%) patients, respectively. Motion artifacts were present in 57 (7.4%). Increasing age (OR per decade, 1.60; 95% CI: 1.26, 2.09; p < 0.001) and limb motor symptoms (OR, 2.36; 95% CI: 1.32, 4.20; p = 0.003) were independently associated with motion artifacts in multivariate analysis. Motion artifacts significantly reduced the accuracy of detecting hemorrhage, and this reduction was greater for the AI tool (from 88% to 67%; p < 0.001) than for radiology reports (from 100% to 93%; p < 0.001). Ischemic and space-occupying lesion detection was not significantly affected. Motion artifacts are common in suspected stroke patients, particularly in the elderly and patients with motor symptoms, reducing the accuracy of hemorrhage detection by both AI and radiologists. Question: Motion artifacts reduce the quality of MRI scans, but it is unclear which factors are associated with them and how they impact diagnostic accuracy. Findings: Motion artifacts occurred in 7% of suspected stroke MRI scans, were associated with higher patient age and motor symptoms, and lowered hemorrhage detection by AI and radiologists. Clinical relevance: Motion artifacts in stroke brain MRIs significantly reduce the diagnostic accuracy of human and AI detection of intracranial hemorrhages. Elderly patients and those with motor symptoms may benefit from a greater focus on motion artifact prevention and reduction.
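The multivariate analysis reported here is a standard logistic regression; a minimal sketch of how such odds ratios and 95% CIs are obtained follows, using statsmodels on synthetic placeholder data (the coefficients, sample generation, and variable names are invented for illustration).

```python
# Minimal sketch: logistic regression of motion artifacts on age (in decades)
# and motor symptoms, reporting odds ratios and 95% CIs. Synthetic data only.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 775
df = pd.DataFrame({
    "age_decades": rng.normal(6.8, 1.6, n),      # age scaled to decades
    "motor_symptoms": rng.integers(0, 2, n),
})
logit = 0.47 * df["age_decades"] + 0.86 * df["motor_symptoms"] - 6.0
df["motion_artifact"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["age_decades", "motor_symptoms"]])
fit = sm.Logit(df["motion_artifact"], X).fit(disp=0)
print(np.exp(fit.params))        # odds ratios (per decade, per symptom status)
print(np.exp(fit.conf_int()))    # 95% confidence intervals on the OR scale
```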

A Lightweight and Robust Framework for Real-Time Colorectal Polyp Detection Using LOF-Based Preprocessing and YOLO-v11n

Saadat Behzadi, Danial Sharifrazi, Bita Mesbahzadeh, Javad Hassannataj Joloudari, Roohallah Alizadehsani

arXiv preprint | Jul 14 2025
Objectives: Timely and accurate detection of colorectal polyps plays a crucial role in diagnosing and preventing colorectal cancer, a major cause of mortality worldwide. This study introduces a new, lightweight, and efficient framework for polyp detection that combines the Local Outlier Factor (LOF) algorithm for filtering noisy data with the YOLO-v11n deep learning model. Study design: An experimental study leveraging deep learning and outlier removal techniques across multiple public datasets. Methods: The proposed approach was tested on five diverse and publicly available datasets: CVC-ColonDB, CVC-ClinicDB, Kvasir-SEG, ETIS, and EndoScene. Since these datasets originally lacked bounding box annotations, we converted their segmentation masks into suitable detection labels. To enhance the robustness and generalizability of our model, we applied 5-fold cross-validation and removed anomalous samples using the LOF method, configured with 30 neighbors and a contamination ratio of 5%. Cleaned data were then fed into YOLO-v11n, a fast and resource-efficient object detection architecture optimized for real-time applications. We trained the model using a combination of modern augmentation strategies to improve detection accuracy under diverse conditions. Results: Our approach significantly improves polyp localization performance, achieving a precision of 95.83%, recall of 91.85%, F1-score of 93.48%, mAP@0.5 of 96.48%, and mAP@0.5:0.95 of 77.75%. Compared to previous YOLO-based methods, our model demonstrates enhanced accuracy and efficiency. Conclusions: These results suggest that the proposed method is well-suited for real-time colonoscopy support in clinical settings. Overall, the study underscores how crucial data preprocessing and model efficiency are when designing effective AI systems for medical imaging.
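The LOF cleaning step is specified concretely (30 neighbors, 5% contamination), so a minimal scikit-learn sketch follows; the per-image feature vectors are a placeholder assumption, since the abstract does not state what representation LOF operates on.

```python
# Sketch of the LOF-based cleaning step as described: mark the 5% most
# anomalous samples as outliers using 30 nearest neighbors, keep the rest.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

features = np.random.rand(1000, 512)   # placeholder per-image feature vectors

lof = LocalOutlierFactor(n_neighbors=30, contamination=0.05)
labels = lof.fit_predict(features)     # -1 = outlier, 1 = inlier
clean = features[labels == 1]          # only inliers go on to YOLO training
print(f"removed {np.sum(labels == -1)} anomalous samples")
```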

Establishing an AI-based diagnostic framework for pulmonary nodules in computed tomography.

Jia R, Liu B, Ali M

PubMed | Jul 12 2025
Pulmonary nodules seen on computed tomography (CT) can be benign or malignant, and early detection is important for optimal management. Existing manual methods of identifying nodules have limitations, such as being time-consuming and error-prone. This study aims to develop an Artificial Intelligence (AI) diagnostic scheme that improves the performance of identifying and categorizing pulmonary nodules using CT scans. The proposed deep learning framework used convolutional neural networks, and the image database totaled 1,056 3D-DICOM CT images. The framework comprised preprocessing, including lung segmentation, followed by nodule detection and classification. Nodule detection was done using the Retina-UNet model, while the extracted features were classified using a Support Vector Machine (SVM). Performance measures, including accuracy, sensitivity, specificity, and the AUROC, were used to evaluate the model's performance during training and validation. Overall, the developed AI model achieved an AUROC of 0.9058. The diagnostic accuracy was 90.58%, with an overall positive predictive value of 89% and an overall negative predictive value of 86%. The algorithm effectively handled the CT images at the preprocessing stage, and the deep learning model performed well in detecting and classifying nodules. The application of the new AI-based diagnostic framework increased the accuracy of diagnosis compared with the traditional approach. It also provides high reliability for detecting pulmonary nodules and classifying lesions, thus minimizing intra-observer differences and improving clinical outcomes. In future work, the advancements may include increasing the size of the annotated dataset and fine-tuning the model to address detection issues with non-solitary nodules.
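A minimal sketch of the SVM classification stage follows, assuming fixed-length feature vectors have already been extracted from detected nodule candidates; the feature dimensionality, labels, and split are placeholders, not the authors' pipeline.

```python
# Hedged sketch: an RBF-kernel SVM over candidate-nodule features, evaluated
# with AUROC as in the study. Feature extraction itself is assumed upstream.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

X = np.random.rand(1056, 64)              # placeholder nodule feature vectors
y = np.random.randint(0, 2, 1056)         # 0 = benign, 1 = malignant (synthetic)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print("AUROC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```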

Accurate and real-time brain tumour detection and classification using optimized YOLOv5 architecture.

Saranya M, Praveena R

PubMed | Jul 12 2025
Brain tumours originate in the brain or its surrounding structures, such as the pituitary and pineal glands, and can be benign or malignant. While benign tumours may grow into neighbouring tissues, metastatic tumours occur when cancer from other organs spreads to the brain. Accurate identification and staging of such tumours are critical, since essentially every aspect of managing a patient's disease depends on them. Image segmentation is highly valuable in medical imaging because it enables surgical simulation, disease diagnosis, and anatomical and pathological analysis. To predict and classify brain tumours in MRI, this study proposes a combined classification and localization framework connecting a Fully Convolutional Neural Network (FCNN) and You Only Look Once version 5 (YOLOv5). The FCNN model is designed to classify images into four categories: benign, glial, pituitary adenoma-related, and meningeal. It utilizes a derivative of Root Mean Square Propagation (RMSProp) optimization to boost the classification rate, and performance was evaluated with the standard measures of precision, recall, F1 score, specificity, and accuracy. The YOLOv5 architecture is then incorporated for more accurate detection of tumours, with the FCNN subsequently used to create tumour segmentation masks. The analysis shows that the suggested approach is more accurate than existing systems, with 98.80% average accuracy in the identification and categorization of brain tumours. This integration of detection and segmentation models presents an effective technique for enhancing the diagnostic performance of the system and adds value within the medical imaging field. On the basis of these findings, advancements in deep learning architectures can improve tumour diagnosis while contributing to the fine-tuning of clinical management.
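As a hedged sketch of the training setup named above, the snippet below wires a small stand-in CNN classifier to torch.optim.RMSprop for one training step; the architecture, four-class labels, and hyperparameters are illustrative assumptions only.

```python
# Minimal sketch of a four-class classifier trained with RMSprop,
# the optimizer family the abstract names. All sizes are placeholders.
import torch
import torch.nn as nn

model = nn.Sequential(                       # stand-in for the FCNN classifier
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(32, 4),          # four tumour categories
)
optimizer = torch.optim.RMSprop(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 128, 128)         # placeholder MRI batch
labels = torch.randint(0, 4, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)      # one illustrative training step
loss.backward()
optimizer.step()
```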

The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET): Rationale and Design.

Ramirez G, Lemley M, Shanbhag A, Kwiecinski J, Miller RJH, Kavanagh PB, Liang JX, Dey D, Slipczuk L, Travin MI, Alexanderson E, Carvajal-Juarez I, Packard RRS, Al-Mallah M, Einstein AJ, Feher A, Acampa W, Knight S, Le VT, Mason S, Sanghani R, Wopperer S, Chareonthaitawee P, Buechel RR, Rosamond TL, deKemp RA, Berman DS, Di Carli MF, Slomka PJ

PubMed | Jul 11 2025
The REgistry of Flow and Perfusion Imaging for Artificial INtelligEnce with PET (REFINE PET) was established to aggregate PET and associated computed tomography (CT) images with clinical data from hospitals around the world into one comprehensive research resource. REFINE PET is a multicenter, international registry that contains both clinical and imaging data. The PET scans were processed using QPET software (Cedars-Sinai Medical Center, Los Angeles, CA), while the CT scans were processed using deep learning (DL) to detect coronary artery calcium (CAC). Patients were followed up for the occurrence of major adverse cardiovascular events (MACE), which include death, myocardial infarction, unstable angina, and late revascularization (>90 days from PET). The REFINE PET registry currently contains data for 35,588 patients from 14 sites, with additional patient data and sites anticipated. Comprehensive clinical data (including demographics, medical history, and stress test results) were integrated with more than 2200 imaging variables across 42 categories. The registry is poised to address a broad range of clinical questions, supported by correlating invasive angiography (within 6 months of MPI) in 5972 patients and a total of 9252 major adverse cardiovascular events during a median follow-up of 4.2 years. The REFINE PET registry leverages the integration of clinical, multimodality imaging, and novel quantitative and AI tools to advance the role of PET/CT MPI in diagnosis and risk stratification.
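As one example of the outcome analyses such a registry enables, the sketch below estimates MACE-free survival with a Kaplan-Meier fit from the lifelines package; the follow-up durations and event indicators are synthetic placeholders, not REFINE PET data.

```python
# Hedged sketch: Kaplan-Meier estimate of MACE-free survival over follow-up,
# the kind of time-to-event analysis a MACE registry supports. Synthetic data.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
followup_years = rng.exponential(6.0, 1000).clip(max=8.0)  # placeholder follow-up
mace_event = rng.random(1000) < 0.26                       # placeholder events

kmf = KaplanMeierFitter()
kmf.fit(followup_years, event_observed=mace_event, label="MACE-free survival")
print(kmf.survival_function_.tail())
```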

Oriented tooth detection: a CBCT image processing method integrated with RoI transformer.

Zhao Z, Wu B, Su S, Liu D, Wu Z, Gao R, Zhang N

PubMed | Jul 11 2025
Cone beam computed tomography (CBCT) has revolutionized dental imaging due to its high spatial resolution and its ability to provide detailed three-dimensional reconstructions of dental structures. This study addresses the challenge of accurate tooth detection and classification in panoramic images (PAN) derived from CBCT, introducing an oriented object detection approach integrated with a Region of Interest (RoI) Transformer, which has not been previously applied in dental imaging. Oriented detection better aligns with the natural growth patterns of teeth, allowing more accurate detection and classification of molars, premolars, canines, and incisors. By integrating the RoI Transformer, the model demonstrates acceptable performance metrics compared to conventional horizontal detection methods, while also offering enhanced visualization capabilities. Furthermore, post-processing techniques, including distance and grayscale value constraints, are employed to correct classification errors and reduce false positives, especially in areas with missing teeth. The experimental results indicate that the proposed method achieves an accuracy of 98.48%, a recall of 97.21%, an F1 score of 97.21%, and an mAP of 98.12% in tooth detection. The proposed method enhances the accuracy of tooth detection in CBCT-derived PAN by reducing background interference and improving the visualization of tooth orientation.
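The distance and grayscale constraints suggest a simple rule-based filter over detections; the sketch below is an illustrative guess at that idea, with hypothetical thresholds and a simplified center-point box representation rather than the paper's actual post-processing.

```python
# Illustrative sketch only: suppress detections that are too dark (likely a
# gap where a tooth is missing) or too close to a stronger detection.
import numpy as np

def filter_detections(boxes, scores, image, min_dist=20.0, min_gray=60.0):
    """boxes: (N, 2) detection centers; scores: (N,); image: 2D grayscale."""
    keep = []
    for i, (cx, cy) in enumerate(boxes):
        patch = image[int(cy) - 5:int(cy) + 5, int(cx) - 5:int(cx) + 5]
        if patch.size == 0 or patch.mean() < min_gray:
            continue                         # grayscale constraint: too dark
        too_close = any(
            np.hypot(cx - boxes[j][0], cy - boxes[j][1]) < min_dist
            and scores[j] > scores[i]
            for j in keep
        )
        if not too_close:                    # distance constraint vs stronger hits
            keep.append(i)
    return keep

image = np.random.rand(512, 1024) * 255
boxes = np.array([[100.0, 200.0], [105.0, 202.0], [400.0, 210.0]])
print(filter_detections(boxes, np.array([0.9, 0.6, 0.8]), image))
```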

Enhanced Detection of Prostate Cancer Lesions on Biparametric MRI Using Artificial Intelligence: A Multicenter, Fully-crossed, Multi-reader Multi-case Trial.

Xing Z, Chen J, Pan L, Huang D, Qiu Y, Sheng C, Zhang Y, Wang Q, Cheng R, Xing W, Ding J

PubMed | Jul 11 2025
To assess artificial intelligence (AI)'s added value in detecting prostate cancer lesions on MRI by comparing radiologists' performance with and without AI assistance. A fully-crossed multi-reader multi-case clinical trial was conducted across three institutions with 10 non-expert radiologists. Biparametric MRI cases comprising T2WI, diffusion-weighted images, and apparent diffusion coefficient were retrospectively collected. Three reading modes were evaluated: AI alone, radiologists alone (unaided), and radiologists with AI (aided). Aided and unaided readings were compared using the Dorfman-Berbaum-Metz method. Reference standards were established by senior radiologists based on pathological reports. Performance was quantified via sensitivity, specificity, and area under the alternative free-response receiver operating characteristic curve (AFROC-AUC). Among 407 eligible male patients (69.5 ± 9.3 years), aided reading significantly improved lesion-level sensitivity from 67.3% (95% confidence interval [CI]: 58.8%, 75.8%) to 85.5% (95% CI: 81.3%, 89.7%), with a substantial difference of 18.2% (95% CI: 10.7%, 25.7%; p < 0.001). Case-level specificity increased from 75.9% (95% CI: 68.7%, 83.1%) to 79.5% (95% CI: 74.1%, 84.8%), demonstrating non-inferiority (p < 0.001). AFROC-AUC was also higher for aided than unaided reading (86.9% vs 76.1%, p < 0.001). AI alone achieved robust performance (AFROC-AUC = 83.1%; 95% CI: 79.7%, 86.6%), with lesion-level sensitivity of 88.4% (95% CI: 84.0%, 92.0%) and case-level specificity of 77.8% (95% CI: 71.5%, 83.3%). Subgroup analysis revealed improved detection for lesions with smaller size and lower Prostate Imaging Reporting and Data System scores. AI-aided reading significantly enhances lesion detection compared to unaided reading, while AI alone also demonstrates high diagnostic accuracy.
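The two headline metrics are straightforward to compute once reads are matched to the reference standard; a minimal sketch follows, using synthetic per-lesion and per-case arrays (the matching of AI or reader marks to reference lesions is assumed to have been done upstream).

```python
# Sketch: lesion-level sensitivity (fraction of reference lesions detected)
# and case-level specificity (fraction of lesion-free cases read as negative).
import numpy as np

lesion_detected = np.array([1, 1, 0, 1, 1, 0, 1])    # one entry per ref lesion
case_has_lesion = np.array([1, 0, 0, 1, 0, 1, 0, 0])  # one entry per case
case_called_pos = np.array([1, 0, 1, 1, 0, 1, 0, 0])  # reader/AI case calls

sensitivity = lesion_detected.mean()
neg_cases = case_has_lesion == 0
specificity = np.mean(case_called_pos[neg_cases] == 0)
print(f"lesion-level sensitivity: {sensitivity:.1%}")
print(f"case-level specificity: {specificity:.1%}")
```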

Artificial Intelligence for Low-Dose CT Lung Cancer Screening: Comparison of Utilization Scenarios.

Lee M, Hwang EJ, Lee JH, Nam JG, Lim WH, Park H, Park CM, Choi H, Park J, Goo JM

PubMed | Jul 10 2025
BACKGROUND. Artificial intelligence (AI) tools for evaluating low-dose CT (LDCT) lung cancer screening examinations are used predominantly for assisting radiologists' interpretations. Alternate utilization scenarios (e.g., use of AI as a prescreener or backup) warrant consideration. OBJECTIVE. The purpose of this study was to evaluate the impact of different AI utilization scenarios on diagnostic outcomes and interpretation times for LDCT lung cancer screening. METHODS. This retrospective study included 366 individuals (358 men, 8 women; mean age, 64 years) who underwent LDCT from May 2017 to December 2017 as part of an earlier prospective lung cancer screening trial. Examinations were interpreted by one of five readers, who reviewed their assigned cases in two sessions (with and without a commercial AI computer-aided detection tool). These interpretations were used to reconstruct simulated AI utilization scenarios: as an assistant (i.e., radiologists interpret all examinations with AI assistance), as a prescreener (i.e., radiologists only interpret examinations with a positive AI result), or as backup (i.e., radiologists reinterpret examinations when AI suggests a missed finding). A group of thoracic radiologists determined the reference standard. Diagnostic outcomes and mean interpretation times were assessed. Decision-curve analysis was performed. RESULTS. Compared with interpretation without AI (recall rate, 22.1%; per-nodule sensitivity, 64.2%; per-examination specificity, 88.8%; mean interpretation time, 164 seconds), AI as an assistant showed higher recall rate (30.3%; p < .001), lower per-examination specificity (81.1%), and no significant change in per-nodule sensitivity (64.8%; p = .86) or mean interpretation time (161 seconds; p = .48); AI as a prescreener showed lower recall rate (20.8%; p = .02) and mean interpretation time (143 seconds; p = .001), higher per-examination specificity (90.3%; p = .04), and no significant difference in per-nodule sensitivity (62.9%; p = .16); and AI as a backup showed increased recall rate (33.6%; p < .001), per-examination sensitivity (66.4%; p < .001), and mean interpretation time (225 seconds; p = .001), with lower per-examination specificity (79.9%; p < .001). Among scenarios, only AI as a prescreener demonstrated higher net benefit than interpretation without AI; AI as an assistant had the least net benefit. CONCLUSION. Different AI implementation approaches yield varying outcomes. The findings support use of AI as a prescreener as the preferred scenario. CLINICAL IMPACT. An approach whereby radiologists only interpret LDCT examinations with a positive AI result can reduce radiologists' workload while preserving sensitivity.
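The scenario reconstruction can be expressed as simple set logic over per-examination AI and reader calls; the sketch below is a simplified illustration with placeholder call rates, and it approximates the backup scenario as a union of calls, ignoring the reader's second-look judgment.

```python
# Hedged sketch of the utilization scenarios: in the prescreener setting an
# exam is recalled only if both AI and the reader flag it (the reader never
# sees AI-negative exams); the backup setting is approximated as a union.
import numpy as np

rng = np.random.default_rng(0)
n = 366
ai_positive = rng.random(n) < 0.35           # placeholder AI positive rate
reader_recall = rng.random(n) < 0.22         # placeholder unaided recall rate

prescreener_recall = ai_positive & reader_recall
backup_recall = reader_recall | ai_positive  # simplified: reader plus AI finds
print(f"prescreener recall rate: {prescreener_recall.mean():.1%}")
print(f"backup recall rate: {backup_recall.mean():.1%}")
```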

Attend-and-Refine: Interactive keypoint estimation and quantitative cervical vertebrae analysis for bone age assessment

Jinhee Kim, Taesung Kim, Taewoo Kim, Dong-Wook Kim, Byungduk Ahn, Yoon-Ji Kim, In-Seok Song, Jaegul Choo

arXiv preprint | Jul 10 2025
In pediatric orthodontics, accurate estimation of growth potential is essential for developing effective treatment strategies. Our research aims to predict this potential by identifying the growth peak and analyzing cervical vertebra morphology solely through lateral cephalometric radiographs. We accomplish this by comprehensively analyzing cervical vertebral maturation (CVM) features from these radiographs. This methodology provides clinicians with a reliable and efficient tool to determine the optimal timings for orthodontic interventions, ultimately enhancing patient outcomes. A crucial aspect of this approach is the meticulous annotation of keypoints on the cervical vertebrae, a task often challenged by its labor-intensive nature. To mitigate this, we introduce the Attend-and-Refine Network (ARNet), a user-interactive, deep learning-based model designed to streamline the annotation process. ARNet features an interaction-guided recalibration network, which adaptively recalibrates image features in response to user feedback, coupled with a morphology-aware loss function that preserves the structural consistency of keypoints. This novel approach substantially reduces manual effort in keypoint identification, thereby enhancing the efficiency and accuracy of the process. Extensively validated across various datasets, ARNet demonstrates remarkable performance and exhibits wide-ranging applicability in medical imaging. In conclusion, our research offers an effective AI-assisted diagnostic tool for assessing growth potential in pediatric orthodontics, marking a significant advancement in the field.
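A plausible reading of the morphology-aware loss is a term that penalizes distortion of the keypoint configuration; the sketch below combines a coordinate loss with a pairwise-distance consistency term, which is an assumption about the idea rather than the authors' exact formulation.

```python
# Illustrative guess at a "morphology-aware" loss: alongside the usual
# coordinate error, penalize changes in pairwise distances between predicted
# and ground-truth keypoints so the vertebral shape stays consistent.
import torch

def morphology_aware_loss(pred, target, alpha=0.1):
    """pred, target: (B, K, 2) keypoint coordinates."""
    coord = torch.nn.functional.l1_loss(pred, target)
    d_pred = torch.cdist(pred, pred)          # (B, K, K) pairwise distances
    d_true = torch.cdist(target, target)
    shape = torch.nn.functional.l1_loss(d_pred, d_true)
    return coord + alpha * shape              # alpha is a hypothetical weight

pred = torch.rand(2, 13, 2, requires_grad=True)   # 13 keypoints, batch of 2
target = torch.rand(2, 13, 2)
loss = morphology_aware_loss(pred, target)
loss.backward()
```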