Page 119 of 3453445 results

An efficient deep learning based approach for automated identification of cervical vertebrae fracture as a clinical support aid.

Singh M, Tripathi U, Patel KK, Mohit K, Pathak S

PubMed | Jul 15 2025
Cervical vertebrae fractures pose a significant risk to a patient's health. Accurate diagnosis and prompt treatment are essential for effective care. Moreover, automated analysis of cervical vertebrae fractures is of utmost importance, as deep learning models have been widely used and play a significant role in identification and classification. In this paper, we propose a novel hybrid transfer learning approach for the identification and classification of fractures in axial CT scan slices of the cervical spine. We utilize the publicly available RSNA (Radiological Society of North America) dataset of annotated cervical vertebrae fractures for our experiments. The CT scan slices undergo preprocessing and analysis to extract features, employing four distinct pre-trained transfer learning models to detect abnormalities in the cervical vertebrae. The top-performing model, Inception-ResNet-v2, is combined with the upsampling component of U-Net to form a hybrid architecture. The hybrid model demonstrates superior performance over traditional deep learning models, achieving an overall accuracy of 98.44% on 2,984 test CT scan slices, a 3.62% relative improvement over the 95% accuracy of predictions made by radiologists. This study advances clinical decision support systems, equipping medical professionals with a powerful tool for timely intervention and accurate diagnosis of cervical vertebrae fractures, thereby enhancing patient outcomes and healthcare efficiency.
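The quoted 3.62% gain is a relative improvement over the radiologists' 95% baseline rather than an absolute difference in percentage points. A quick check, using only the figures quoted in the abstract:

```python
# Relative vs. absolute improvement of the hybrid model (98.44%)
# over the radiologist baseline (95%), as quoted in the abstract.
model_acc = 98.44
radiologist_acc = 95.0

absolute_gain = model_acc - radiologist_acc            # 3.44 percentage points
relative_gain = absolute_gain / radiologist_acc * 100  # ~3.62% relative gain

print(f"absolute: {absolute_gain:.2f} points, relative: {relative_gain:.2f}%")
```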

Learning quality-guided multi-layer features for classifying visual types with ball sports application.

Huang X, Liu T, Yu Y

PubMed | Jul 15 2025
Nowadays, breast cancer is one of the leading causes of death among women. This highlights the need for precise X-ray image analysis in the medical and imaging fields. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets, mimicking human visual perception. We begin by using a large dataset of breast cancer images and apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique in the weak annotation context. This technique identifies the patches that are most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We leverage these features to train a multi-class SVM classifier, which categorizes the images into various breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
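The patch-ranking-and-aggregation step described above can be sketched in a few lines. The BING objectness measure and the weak-annotation ranking are the paper's; the concrete top-k selection and mean-pooling below are illustrative assumptions, not the authors' exact method:

```python
import numpy as np

def aggregate_top_patches(features, objectness, k=5):
    """Keep the k patches with the highest objectness score and
    mean-pool their feature vectors into one image-level descriptor.

    features   : (n_patches, dim) array of per-patch features
    objectness : (n_patches,) array of patch scores (e.g. BING-style)
    """
    top = np.argsort(objectness)[::-1][:k]   # indices of the top-k patches
    return features[top].mean(axis=0)        # (dim,) image descriptor

# Toy example: 4 patches with 3-dim features; patches 1 and 2 score highest.
feats = np.array([[1., 0., 0.],
                  [0., 1., 0.],
                  [0., 0., 1.],
                  [9., 9., 9.]])
scores = np.array([0.1, 0.9, 0.8, 0.05])
desc = aggregate_top_patches(feats, scores, k=2)
print(desc)  # [0.  0.5 0.5]
```

Such image-level descriptors would then be the inputs to the multi-class SVM stage.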

Performance of a screening-trained DL model for pulmonary nodule malignancy estimation of incidental clinical nodules.

Dinnessen R, Peeters D, Antonissen N, Mohamed Hoesein FAA, Gietema HA, Scholten ET, Schaefer-Prokop C, Jacobs C

PubMed | Jul 15 2025
To test the performance of a DL model developed and validated for screen-detected pulmonary nodules on incidental nodules detected in a clinical setting. A retrospective dataset of incidental pulmonary nodules sized 5-15 mm was collected, and a subset of size-matched solid nodules was selected. The performance of the DL model was compared to the Brock model. AUCs with 95% CIs were compared using the DeLong method. Sensitivity and specificity were determined at various thresholds, using a 10% threshold for the Brock model as reference. The model's calibration was visually assessed. The dataset included 49 malignant and 359 benign solid or part-solid nodules, and the size-matched dataset included 47 malignant and 47 benign solid nodules. In the complete dataset, AUCs [95% CI] were 0.89 [0.85, 0.93] for the DL model and 0.86 [0.81, 0.92] for the Brock model (p = 0.27). In the size-matched subset, AUCs of the DL and Brock models were 0.78 [0.69, 0.88] and 0.58 [0.46, 0.69] (p < 0.01), respectively. At a 10% threshold, the Brock model had a sensitivity of 0.49 [0.35, 0.63] and a specificity of 0.92 [0.89, 0.94]. At a threshold of 17%, the DL model matched the specificity of the Brock model at the 10% threshold, but had a higher sensitivity (0.57 [0.43, 0.71]). Calibration analysis revealed that the DL model overestimated the malignancy probability. The DL model demonstrated good discriminatory performance in a dataset of incidental nodules and outperformed the Brock model, but may need recalibration for clinical practice. Question What is the performance of a DL model for pulmonary nodule malignancy risk estimation developed on screening data in a dataset of incidentally detected nodules? Findings The DL model performed well on a dataset of nodules from clinical routine care and outperformed the Brock model in a size-matched subset. 
Clinical relevance This study provides further evidence about the potential of DL models for risk stratification of incidental nodules, which may improve nodule management in routine clinical practice.
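Matching specificity between two risk models, as done above for the Brock 10% and DL 17% operating points, reduces to computing sensitivity and specificity from predicted probabilities at a chosen cut-off. A minimal sketch (the scores and labels below are invented for illustration, not the study's data):

```python
def sens_spec(labels, probs, threshold):
    """Sensitivity and specificity of the rule `probs >= threshold`
    against binary ground truth (1 = malignant, 0 = benign)."""
    tp = sum(1 for y, p in zip(labels, probs) if y == 1 and p >= threshold)
    fn = sum(1 for y, p in zip(labels, probs) if y == 1 and p < threshold)
    tn = sum(1 for y, p in zip(labels, probs) if y == 0 and p < threshold)
    fp = sum(1 for y, p in zip(labels, probs) if y == 0 and p >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Invented nodule malignancy scores for illustration only.
labels = [1, 1, 1, 0, 0, 0, 0, 0]
probs  = [0.80, 0.20, 0.55, 0.05, 0.15, 0.30, 0.02, 0.12]
sens, spec = sens_spec(labels, probs, threshold=0.17)
print(sens, spec)  # 1.0 0.8
```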

Advanced finite segmentation model with hybrid classifier learning for high-precision brain tumor delineation in PET imaging.

Murugan K, Palanisamy S, Sathishkumar N, Alshalali TAN

PubMed | Jul 15 2025
Brain tumor segmentation plays a crucial role in clinical diagnostics and treatment planning, yet accurate and efficient segmentation remains a significant challenge due to complex tumor structures and variations in imaging modalities. Multi-feature selection and region classification depend on continuous, homogeneous features to improve the precision of tumor detection. This classification is required to suppress discreteness across varying extraction rates and to confine segmentation to the smallest infected region. This study proposes a Finite Segmentation Model (FSM) with Improved Classifier Learning (ICL) to enhance segmentation accuracy in Positron Emission Tomography (PET) images. The FSM-ICL framework integrates advanced textural feature extraction, deep learning-based classification, and an adaptive segmentation approach to differentiate between tumor and non-tumor regions with high precision. Our model is trained and validated on the Synthetic Whole-Head Brain Tumor Segmentation Dataset, consisting of 1000 training and 426 testing images, achieving a segmentation accuracy of 92.57%, significantly outperforming existing approaches such as NRAN (62.16%), DSSE-V-Net (71.47%), and DenseUNet+ (83.93%). Furthermore, FSM-ICL enhances classification precision to 95.59%, reduces classification error to 5.67%, and cuts classification time to 572.39 ms, demonstrating a 10.09% improvement in precision and a 10.96% boost in classification rates over state-of-the-art methods. The hybrid classifier learning approach effectively addresses segmentation discreteness, ensuring detection of both continuous and discrete tumor regions with superior feature differentiation. This work has significant implications for automated tumor detection, personalized treatment strategies, and AI-driven medical imaging advancements. Future directions include incorporating micro-segmentation and pre-classification techniques to further optimize performance in densely pixel-packed datasets.
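The abstract reports overall segmentation accuracy; a standard complementary way to score tumor/non-tumor delineation is the Dice overlap coefficient (our choice here, not necessarily the metric the authors used):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 4x4 masks: the prediction recovers 2 of the 3 tumor pixels.
truth = np.zeros((4, 4), dtype=int); truth[1, 1:4] = 1  # 3 tumor pixels
pred  = np.zeros((4, 4), dtype=int); pred[1, 2:4] = 1   # 2 predicted pixels
print(dice_coefficient(pred, truth))  # 2*2 / (2+3) = 0.8
```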

A diffusion model for universal medical image enhancement.

Fei B, Li Y, Yang W, Gao H, Xu J, Ma L, Yang Y, Zhou P

PubMed | Jul 15 2025
The development of medical imaging techniques has made a significant contribution to clinical decision-making. However, suboptimal imaging quality, such as irregular illumination or imbalanced intensity, presents significant obstacles to automated disease screening, analysis, and diagnosis. Existing approaches for natural image enhancement are mostly trained on numerous paired images, which raises data-collection and training costs while limiting generalization. Here, we introduce a pioneering training-free Diffusion Model for Universal Medical Image Enhancement, named UniMIE. UniMIE demonstrates unsupervised enhancement across various medical image modalities without any fine-tuning, relying solely on a single model pre-trained on ImageNet. We conduct a comprehensive evaluation on 13 imaging modalities and over 15 medical types, demonstrating better quality, robustness, and accuracy than modality-specific and data-inefficient models. By delivering high-quality enhancement and improved accuracy on downstream tasks across a wide range of applications, UniMIE exhibits considerable potential to accelerate the development of diagnostic tools and customized treatment plans. UniMIE represents a transformative approach to medical image enhancement, offering a versatile and robust solution that adapts to diverse imaging conditions. By improving image quality and facilitating better downstream analyses, UniMIE has the potential to revolutionize clinical workflows and enhance diagnostic accuracy across a wide range of medical applications.

Enhancing breast positioning quality through real-time AI feedback.

Sexauer R, Riehle F, Borkowski K, Ruppert C, Potthast S, Schmidt N

PubMed | Jul 15 2025
To enhance mammography quality and increase cancer detection by implementing continuous AI-driven feedback, ensuring reliable, consistent, and high-quality screening according to the 'Perfect', 'Good', 'Moderate', and 'Inadequate' (PGMI) criteria. To assess the impact of the AI software 'b-box<sup>TM</sup>' on mammography quality, we conducted a comparative analysis of PGMI scores. We evaluated scores 50 days before (A) and after the software's implementation in 2021 (B), along with assessments made in the first week of August 2022 (C1) and 2023 (C2), comparing them to evaluations conducted by two readers. Except for postsurgical patients, we included all diagnostic and screening mammograms from one tertiary hospital. A total of 4577 mammograms from 1220 women (mean age: 59, range: 21-94, standard deviation: 11.18) were included: 1728 images before (A) and 2330 images after the 2021 software implementation (B), along with 269 images in 2022 (C1) and 250 images in 2023 (C2). The results indicated a significant improvement in diagnostic image quality (p < 0.01). The percentage of 'Perfect' examinations rose from 22.34% to 32.27%, while 'Inadequate' images decreased from 13.31% to 5.41% in 2021, continuing the positive trend with 4.46% and 3.20% 'Inadequate' images in 2022 and 2023, respectively (p < 0.01). Using a reliable software platform to perform AI-driven quality evaluation in real time has the potential to make lasting improvements in image quality, support radiographers' professional growth, and elevate institutional quality standards and documentation simultaneously. Question How can AI-powered quality assessment reduce inadequate mammographic quality, which is known to impact sensitivity and increase the risk of interval cancers? Findings AI implementation decreased 'Inadequate' mammograms from 13.31% to 3.20% and substantially improved parenchyma visualization, with consistent subgroup trends.
Clinical relevance By reducing 'inadequate' mammograms and enhancing imaging quality, AI-driven tools improve diagnostic reliability and support better outcomes in breast cancer screening.
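The drop in 'Inadequate' images (13.31% of 1728 before vs. 5.41% of 2330 after) can be checked with a two-proportion z-test. The counts below are reconstructed from the percentages quoted above, and the z-test is a plain stdlib alternative, not necessarily the test the authors applied:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# 'Inadequate' counts reconstructed from 13.31% of 1728 and 5.41% of 2330.
z, p = two_proportion_z(round(0.1331 * 1728), 1728, round(0.0541 * 2330), 2330)
print(f"z = {z:.2f}, p = {p:.2e}")
```

The resulting z is far beyond the 2.58 cut-off for p < 0.01, consistent with the significance reported in the abstract.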

Fetal-Net: enhancing Maternal-Fetal ultrasound interpretation through Multi-Scale convolutional neural networks and Transformers.

Islam U, Ali YA, Al-Razgan M, Ullah H, Almaiah MA, Tariq Z, Wazir KM

PubMed | Jul 15 2025
Ultrasound imaging plays an important role in fetal growth and maternal-fetal health evaluation, but its interpretation is quite challenging due to the complicated anatomy of the fetus and fluctuations in image quality. Although deep learning approaches, including Convolutional Neural Networks (CNNs), have been promising, they have largely been limited to a single task, such as segmentation or detection of fetal structures, and thus lack an integrated solution that accounts for the intricate interplay between anatomical structures. To overcome these limitations, Fetal-Net, a new deep learning architecture that integrates multi-scale CNNs and transformer layers, was developed. The model was trained on a large, expertly annotated set of more than 12,000 ultrasound images across different anatomical planes for effective identification of fetal structures and anomaly detection. Fetal-Net achieved excellent performance in anomaly detection, with precision of 96.5%, accuracy of 97.5%, and recall of 97.8%, and showed robustness across various imaging settings, making it a potent means of augmenting prenatal care through refined ultrasound image interpretation.

Deep Learning for Osteoporosis Diagnosis Using Magnetic Resonance Images of Lumbar Vertebrae.

Mousavinasab SM, Hedyehzadeh M, Mousavinasab ST

PubMed | Jul 15 2025
This work uses T1, STIR, and T2 MRI sequences of the lumbar vertebrae together with BMD measurements to identify osteoporosis using deep learning. An analysis of 1350 MRI images from 50 individuals who had simultaneous BMD and MRI scans was performed. The accuracy of a custom convolutional neural network for osteoporosis classification was assessed. T2-weighted MRIs were the most diagnostic: the proposed model outperformed the T1 and STIR sequences with 88.5% accuracy, 88.9% sensitivity, and a 76.1% F1-score. Its performance was compared against modern deep learning models, including GoogleNet, EfficientNet-B3, ResNet50, InceptionV3, and InceptionResNetV2. These architectures performed well, but our model was more sensitive and accurate. This research shows that T2-weighted MRI is the best sequence for osteoporosis diagnosis and that deep learning on MRI can outperform BMD-based approaches while avoiding ionizing radiation. These results support the clinical use of deep learning with MRI for safe, accurate, and rapid osteoporosis diagnosis.

Motion artifacts and image quality in stroke MRI: associated factors and impact on AI and human diagnostic accuracy.

Krag CH, Müller FC, Gandrup KL, Andersen MB, Møller JM, Liu ML, Rud A, Krabbe S, Al-Farra L, Nielsen M, Kruuse C, Boesen MP

PubMed | Jul 15 2025
To assess the prevalence of motion artifacts and the factors associated with them in a cohort of suspected stroke patients, and to determine their impact on diagnostic accuracy for both AI and radiologists. This retrospective cross-sectional study included brain MRI scans of consecutive adult suspected stroke patients from a non-comprehensive Danish stroke center between January and April 2020. An expert neuroradiologist identified acute ischemic, hemorrhagic, and space-occupying lesions as references. Two blinded radiology residents rated MRI image quality and motion artifacts. The diagnostic accuracy of a CE-marked deep learning tool was compared to that of radiology reports. Multivariate analysis examined associations between patient characteristics and motion artifacts. 775 patients (68 years ± 16, 420 female) were included. Acute ischemic, hemorrhagic, and space-occupying lesions were found in 216 (27.9%), 12 (1.5%), and 20 (2.6%) patients, respectively. Motion artifacts were present in 57 (7.4%). Increasing age (OR per decade, 1.60; 95% CI: 1.26, 2.09; p < 0.001) and limb motor symptoms (OR, 2.36; 95% CI: 1.32, 4.20; p = 0.003) were independently associated with motion artifacts in multivariate analysis. Motion artifacts significantly reduced the accuracy of detecting hemorrhage. This reduction was greater for the AI tool (from 88 to 67%; p < 0.001) than for radiology reports (from 100 to 93%; p < 0.001). Ischemic and space-occupying lesion detection was not significantly affected. Motion artifacts are common in suspected stroke patients, particularly in the elderly and patients with motor symptoms, reducing accuracy for hemorrhage detection by both AI and radiologists. Question Motion artifacts reduce the quality of MRI scans, but it is unclear which factors are associated with them and how they impact diagnostic accuracy.
Findings Motion artifacts occurred in 7% of suspected stroke MRI scans, associated with higher patient age and motor symptoms, lowering hemorrhage detection by AI and radiologists. Clinical relevance Motion artifacts in stroke brain MRIs significantly reduce the diagnostic accuracy of human and AI detection of intracranial hemorrhages. Elderly patients and those with motor symptoms may benefit from a greater focus on motion artifact prevention and reduction.
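An odds ratio reported per decade of age maps to a per-year logistic-regression coefficient via OR_decade = exp(10·β). Using only the 1.60 figure quoted above:

```python
import math

# OR per decade of age from the multivariate analysis quoted above.
or_per_decade = 1.60

beta_per_year = math.log(or_per_decade) / 10  # logistic coefficient per year of age
or_per_year = math.exp(beta_per_year)         # ~1.048: ~4.8% higher odds per year

print(f"beta/year = {beta_per_year:.4f}, OR/year = {or_per_year:.3f}")
```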

Preoperative prediction value of 2.5D deep learning model based on contrast-enhanced CT for lymphovascular invasion of gastric cancer.

Sun X, Wang P, Ding R, Ma L, Zhang H, Zhu L

PubMed | Jul 15 2025
To develop and validate artificial intelligence models based on contrast-enhanced CT (CECT) images of the venous phase, using deep learning (DL) and radiomics approaches, to predict lymphovascular invasion (LVI) in gastric cancer prior to surgery. We retrospectively analyzed data from 351 gastric cancer patients, randomly splitting them into two cohorts (training cohort, n = 246; testing cohort, n = 105) in a 7:3 ratio. The tumor region of interest (ROI) was outlined on venous phase CT images as the input for the development of radiomics, 2D, and 3D DL models (DL2D and DL3D). Of note, by centering the analysis on the tumor's maximum cross-section and incorporating seven adjacent 2D images, we generated stable 2.5D data to establish a multi-instance learning (MIL) model. Meanwhile, clinical and feature-combined models that integrated traditional CT enhancement parameters (Ratio), radiomics, and MIL features were also constructed. The models' performance was evaluated by the area under the curve (AUC), confusion matrices, and detailed metrics such as sensitivity and specificity. A nomogram based on the combined model was established and applied to clinical practice. The calibration curve was used to evaluate the consistency between each model's predicted LVI and the actual LVI of gastric cancer, and decision curve analysis (DCA) was used to evaluate the net benefit of each model. Among the developed models, the 2.5D MIL and combined models exhibited superior performance in comparison to the clinical, radiomics, DL2D, and DL3D models, as evidenced by AUC values of 0.820, 0.822, 0.748, 0.725, 0.786, and 0.711 on the testing set, respectively. Additionally, the 2.5D MIL and combined models showed good calibration for LVI prediction and provided a net clinical benefit when the threshold probability ranged from 0.31 to 0.98 and from 0.28 to 0.84, respectively, indicating their clinical usefulness.
The 2.5D MIL and combined models demonstrated strong performance in predicting preoperative lymphovascular invasion in gastric cancer, offering valuable insights for clinicians when selecting appropriate treatment options for gastric cancer patients.
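The 2.5D input described above (the slice with the maximum tumor cross-section plus seven adjacent slices) can be sketched with NumPy. The slice-selection rule follows the abstract; the array shapes and edge-clamping behavior are illustrative assumptions:

```python
import numpy as np

def build_2p5d_stack(volume, roi_masks, n_adjacent=7):
    """Center the stack on the slice with the largest ROI area and
    return that slice plus `n_adjacent` neighbouring slices.

    volume    : (n_slices, H, W) CT volume
    roi_masks : (n_slices, H, W) binary tumor masks
    """
    areas = roi_masks.reshape(len(roi_masks), -1).sum(axis=1)
    center = int(np.argmax(areas))            # maximum cross-section slice
    half = n_adjacent // 2
    idx = np.arange(center - half, center - half + n_adjacent + 1)
    idx = np.clip(idx, 0, len(volume) - 1)    # repeat edge slices at the boundary
    return volume[idx]

# Toy volume of 10 slices; slice i is filled with the value i.
vol = np.arange(10)[:, None, None] * np.ones((10, 2, 2))
masks = np.zeros((10, 2, 2))
masks[6, 0, 0] = 1                            # largest ROI sits on slice 6
stack = build_2p5d_stack(vol, masks)
print(stack.shape)  # (8, 2, 2): center slice plus 7 neighbours
```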