Kaneko Y, Miwa K, Yamao T, Miyaji N, Nishii R, Yamazaki K, Nishikawa N, Yusa M, Higashi T

PubMed · Jul 15, 2025
Organ segmentation using ¹⁸F-FDG PET images alone has not been extensively explored. Segmentation methods based on deep learning (DL) have traditionally relied on CT or MRI images, which are vulnerable to alignment issues and artifacts. This study aimed to develop a DL approach for segmenting the entire liver based solely on ¹⁸F-FDG PET images. We analyzed data from 120 patients who were assessed using ¹⁸F-FDG PET. A three-dimensional (3D) U-Net from the nnU-Net framework served as the DL model, with preprocessed PET images as its input. The model was trained with 5-fold cross-validation on data from 100 patients, and segmentation accuracy was evaluated on an independent test set of 20 patients. Accuracy was assessed using Intersection over Union (IoU), the Dice coefficient, and liver volume. Image quality was evaluated using the mean (SUVmean) and maximum (SUVmax) standardized uptake values and the signal-to-noise ratio (SNR). The model achieved an average IoU of 0.89 and an average Dice coefficient of 0.94 on the 20-patient test set, indicating high segmentation accuracy. No significant discrepancies in image quality metrics were identified compared with ground truth. Liver regions were accurately extracted from ¹⁸F-FDG PET images, allowing rapid and stable evaluation of liver uptake in individual patients without the need for CT or MRI assessments.
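For reference, the accuracy metrics reported here (IoU, Dice, liver volume) are standard and straightforward to reproduce; a minimal sketch for binary 3D masks, not taken from the authors' code, is:

```python
# Minimal sketch (not the authors' code): IoU and Dice between a predicted
# and a ground-truth binary 3D liver mask, plus volume from voxel spacing.
import numpy as np

def iou_and_dice(pred: np.ndarray, truth: np.ndarray) -> tuple[float, float]:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    total = pred.sum() + truth.sum()
    iou = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return float(iou), float(dice)

def liver_volume_ml(mask: np.ndarray, spacing_mm: tuple[float, float, float]) -> float:
    # Voxel volume in mm^3, converted to millilitres.
    return float(mask.sum() * np.prod(spacing_mm) / 1000.0)
```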

Singh M, Tripathi U, Patel KK, Mohit K, Pathak S

PubMed · Jul 15, 2025
Cervical vertebrae fractures pose a significant risk to a patient's health, and accurate diagnosis with prompt treatment is essential for good outcomes. Automated analysis of cervical vertebrae fractures is therefore of great importance, and deep learning models have come to play a significant role in their identification and classification. In this paper, we propose a novel hybrid transfer learning approach for the identification and classification of fractures in axial CT scan slices of the cervical spine. We utilize the publicly available RSNA (Radiological Society of North America) dataset of annotated cervical vertebrae fractures for our experiments. The CT scan slices undergo preprocessing and feature extraction, employing four distinct pre-trained transfer learning models to detect abnormalities in the cervical vertebrae. The top-performing model, Inception-ResNet-v2, is combined with the upsampling component of U-Net to form a hybrid architecture. The hybrid model demonstrates superior performance over traditional deep learning models, achieving an overall accuracy of 98.44% on 2,984 test CT scan slices, a 3.62% improvement over the 95% accuracy of predictions made by radiologists. This study advances clinical decision support systems, equipping medical professionals with a powerful tool for timely intervention and accurate diagnosis of cervical vertebrae fractures, thereby enhancing patient outcomes and healthcare efficiency.
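The abstract names the building blocks (an Inception-ResNet-v2 encoder plus the upsampling component of U-Net) but not the exact wiring; a hedged Keras sketch of such a hybrid, with an assumed decoder depth and a binary fracture head, might look like:

```python
# Assumed sketch of the described hybrid (not the authors' architecture):
# Inception-ResNet-v2 encoder, U-Net-style transposed-conv upsampling,
# and a binary fracture-classification head.
import tensorflow as tf
from tensorflow.keras import layers

encoder = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))

x = encoder.output                              # coarse 8x8 feature map
for filters in (256, 128, 64):                  # decoder depth is an assumption
    x = layers.Conv2DTranspose(filters, 3, strides=2,
                               padding="same", activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)
out = layers.Dense(1, activation="sigmoid")(x)  # fracture vs. no fracture

model = tf.keras.Model(encoder.input, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```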

Huang X, Liu T, Yu Y

PubMed · Jul 15, 2025
Breast cancer is today one of the leading causes of death among women, which highlights the need for precise X-ray image analysis in the medical and imaging fields. In this study, we present an advanced perceptual deep learning framework that extracts key features from large X-ray datasets, mimicking human visual perception. We begin with a large dataset of breast cancer images and apply the BING objectness measure to identify relevant visual and semantic patches. To manage the large number of object-aware patches, we propose a new ranking technique for the weak-annotation setting that identifies the patches most aligned with human visual judgment. These key patches are then aggregated to extract meaningful features from each image. We leverage these features to train a multi-class SVM classifier, which categorizes the images into various breast cancer stages. The effectiveness of our deep learning model is demonstrated through extensive comparative analysis and visual examples.
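A sketch of the final stage only follows: aggregating ranked patch features into one descriptor per image and training a multi-class SVM. The BING proposal extraction and the ranking technique are the paper's contribution and are not reproduced; the aggregation and data below are assumptions.

```python
# Synthetic sketch of the patch-aggregation + multi-class SVM stage.
import numpy as np
from sklearn.svm import SVC

def image_descriptor(patch_features: np.ndarray, k: int = 50) -> np.ndarray:
    # patch_features: (n_patches, d), rows sorted by the ranking score.
    # Mean-pooling the top-k patches is an assumed aggregation; the
    # abstract does not specify one.
    return patch_features[:k].mean(axis=0)

rng = np.random.default_rng(0)
all_patch_features = [rng.normal(size=(200, 128)) for _ in range(60)]  # synthetic
y = rng.integers(0, 4, size=60)                  # four assumed stage labels

X = np.stack([image_descriptor(f) for f in all_patch_features])
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
print(clf.predict(X[:5]))
```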

Dinnessen R, Peeters D, Antonissen N, Mohamed Hoesein FAA, Gietema HA, Scholten ET, Schaefer-Prokop C, Jacobs C

PubMed · Jul 15, 2025
To test the performance of a DL model, developed and validated for screen-detected pulmonary nodules, on incidental nodules detected in a clinical setting. A retrospective dataset of incidental pulmonary nodules sized 5-15 mm was collected, and a subset of size-matched solid nodules was selected. The performance of the DL model was compared to the Brock model. AUCs with 95% CIs were compared using the DeLong method. Sensitivity and specificity were determined at various thresholds, using a 10% threshold for the Brock model as reference. The model's calibration was visually assessed. The dataset included 49 malignant and 359 benign solid or part-solid nodules, and the size-matched dataset included 47 malignant and 47 benign solid nodules. In the complete dataset, AUCs [95% CI] were 0.89 [0.85, 0.93] for the DL model and 0.86 [0.81, 0.92] for the Brock model (p = 0.27). In the size-matched subset, AUCs of the DL and Brock models were 0.78 [0.69, 0.88] and 0.58 [0.46, 0.69] (p < 0.01), respectively. At a 10% threshold, the Brock model had a sensitivity of 0.49 [0.35, 0.63] and a specificity of 0.92 [0.89, 0.94]. At a threshold of 17%, the DL model matched the specificity of the Brock model at the 10% threshold, but had a higher sensitivity (0.57 [0.43, 0.71]). Calibration analysis revealed that the DL model overestimated the malignancy probability. The DL model demonstrated good discriminatory performance in a dataset of incidental nodules and outperformed the Brock model, but may need recalibration for clinical practice.

Question: What is the performance of a DL model for pulmonary nodule malignancy risk estimation, developed on screening data, in a dataset of incidentally detected nodules?
Findings: The DL model performed well on a dataset of nodules from clinical routine care and outperformed the Brock model in a size-matched subset.
Clinical relevance: This study provides further evidence about the potential of DL models for risk stratification of incidental nodules, which may improve nodule management in routine clinical practice.
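The study compares AUCs with the DeLong test, which has no implementation in scikit-learn or SciPy; the sketch below substitutes a paired bootstrap on synthetic scores to illustrate the same comparison, together with the sensitivity/specificity readout at the two thresholds mentioned above.

```python
# Paired bootstrap of an AUC difference (stand-in for the DeLong test),
# plus sensitivity/specificity at fixed probability thresholds. All data
# here is synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=400)                                 # synthetic labels
p_dl = np.clip(0.4 * y + 0.6 * rng.uniform(size=400), 0, 1)      # synthetic DL scores
p_brock = np.clip(0.25 * y + 0.75 * rng.uniform(size=400), 0, 1)

diffs = []
for _ in range(2000):                     # resample patients with replacement
    idx = rng.integers(0, len(y), len(y))
    if len(np.unique(y[idx])) < 2:
        continue                          # AUC undefined without both classes
    diffs.append(roc_auc_score(y[idx], p_dl[idx]) - roc_auc_score(y[idx], p_brock[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: [{lo:.3f}, {hi:.3f}]")

def sens_spec(y, p, thr):
    pred = p >= thr
    sens = (pred & (y == 1)).sum() / (y == 1).sum()
    spec = (~pred & (y == 0)).sum() / (y == 0).sum()
    return round(sens, 3), round(spec, 3)

print("Brock @10%:", sens_spec(y, p_brock, 0.10), "DL @17%:", sens_spec(y, p_dl, 0.17))
```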

Murugan K, Palanisamy S, Sathishkumar N, Alshalali TAN

PubMed · Jul 15, 2025
Brain tumor segmentation plays a crucial role in clinical diagnostics and treatment planning, yet accurate and efficient segmentation remains a significant challenge due to complex tumor structures and variations in imaging modalities. Multi-feature selection and region classification depend on continuous homogeneous features to improve the precision of tumor detection, and this classification is required to suppress discreteness across varying extraction rates and to converge on the smallest infected segmentation region. This study proposes a Finite Segmentation Model (FSM) with Improved Classifier Learning (ICL) to enhance segmentation accuracy in Positron Emission Tomography (PET) images. The FSM-ICL framework integrates advanced textural feature extraction, deep learning-based classification, and an adaptive segmentation approach to differentiate between tumor and non-tumor regions with high precision. Our model is trained and validated on the Synthetic Whole-Head Brain Tumor Segmentation Dataset, consisting of 1000 training and 426 testing images, achieving a segmentation accuracy of 92.57% and significantly outperforming existing approaches such as NRAN (62.16%), DSSE-V-Net (71.47%), and DenseUNet+ (83.93%). Furthermore, FSM-ICL raises classification precision to 95.59%, reduces classification error to 5.67%, and cuts classification time to 572.39 ms, a 10.09% improvement in precision and a 10.96% gain in classification rate over state-of-the-art methods. The hybrid classifier learning approach effectively addresses segmentation discreteness, ensuring detection of both continuous and discrete tumor regions with superior feature differentiation. This work has significant implications for automated tumor detection, personalized treatment strategies, and AI-driven medical imaging advancements. Future directions include incorporating micro-segmentation and pre-classification techniques to further optimize performance in densely pixel-packed datasets.
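The abstract mentions "advanced textural feature extraction" without detail; a common baseline for texture features in PET analysis is the gray-level co-occurrence matrix (GLCM), sketched here with scikit-image purely as an assumed stand-in for whatever FSM-ICL actually uses.

```python
# GLCM texture features for a candidate tumor region (assumed baseline,
# not the paper's method); the input patch below is synthetic.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_features(region: np.ndarray) -> dict:
    # region: 2D uint8 patch. Two offsets (horizontal, vertical) averaged.
    glcm = graycomatrix(region, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return {prop: float(graycoprops(glcm, prop).mean())
            for prop in ("contrast", "homogeneity", "energy", "correlation")}

patch = (np.random.default_rng(0).random((64, 64)) * 255).astype(np.uint8)
print(glcm_features(patch))
```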

Fei B, Li Y, Yang W, Gao H, Xu J, Ma L, Yang Y, Zhou P

PubMed · Jul 15, 2025
The development of medical imaging techniques has made a significant contribution to clinical decision-making. However, suboptimal imaging quality, such as irregular illumination or imbalanced intensity, presents significant obstacles to automated disease screening, analysis, and diagnosis. Existing approaches for natural image enhancement are mostly trained on numerous paired images, which makes data collection and training costly, and they generalize poorly. Here, we introduce a pioneering training-free Diffusion Model for Universal Medical Image Enhancement, named UniMIE. UniMIE performs unsupervised enhancement across various medical image modalities without any fine-tuning, relying solely on a single model pre-trained on ImageNet. We conduct a comprehensive evaluation on 13 imaging modalities and more than 15 medical image types, demonstrating better quality, robustness, and accuracy than modality-specific and data-inefficient models. By delivering high-quality enhancement and corresponding accuracy gains on downstream tasks, UniMIE offers a versatile and robust solution that adapts to diverse imaging conditions, with considerable potential to accelerate the development of diagnostic tools, improve clinical workflows, and enhance diagnostic accuracy across a wide range of medical applications.
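The abstract does not detail UniMIE's procedure, so the following is only a generic sketch of the training-free idea it describes: a frozen pretrained diffusion model supplies the image prior, and each reverse step is nudged toward consistency with the degraded input. The `denoiser` below is a stub standing in for an ImageNet-pretrained noise predictor.

```python
# Generic guided reverse-diffusion sketch (not UniMIE's actual algorithm).
import torch

def denoiser(x_t: torch.Tensor, t: int) -> torch.Tensor:
    return torch.zeros_like(x_t)  # placeholder for the frozen pretrained model

def guided_reverse_step(x_t, t, y_degraded, alpha_t, alpha_prev, guidance=0.1):
    eps = denoiser(x_t, t)                                    # predicted noise
    x0_hat = (x_t - (1 - alpha_t) ** 0.5 * eps) / alpha_t ** 0.5
    x0_hat = x0_hat - guidance * (x0_hat - y_degraded)        # data-consistency pull
    # Deterministic DDIM-style transition to the previous timestep.
    return alpha_prev ** 0.5 * x0_hat + (1 - alpha_prev) ** 0.5 * eps

x = torch.randn(1, 3, 64, 64)          # start from pure noise
y = torch.zeros(1, 3, 64, 64)          # degraded input (stub)
for t, (a_t, a_prev) in enumerate(zip([0.02, 0.1, 0.5], [0.1, 0.5, 0.99])):
    x = guided_reverse_step(x, t, y, a_t, a_prev)
```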

Sexauer R, Riehle F, Borkowski K, Ruppert C, Potthast S, Schmidt N

PubMed · Jul 15, 2025
To enhance mammography quality and increase cancer detection by implementing continuous AI-driven feedback based on the 'Perfect', 'Good', 'Moderate', and 'Inadequate' (PGMI) criteria, ensuring reliable, consistent, and high-quality screening. To assess the impact of the AI software 'b-box™' on mammography quality, we conducted a comparative analysis of PGMI scores: 50 days before (A) and after (B) the software's implementation in 2021, along with assessments from the first week of August 2022 (C1) and 2023 (C2), compared against evaluations by two readers. Except for postsurgical patients, we included all diagnostic and screening mammograms from one tertiary hospital. A total of 4577 mammograms from 1220 women (mean age: 59 years, range: 21-94, standard deviation: 11.18) were included: 1728 images before (A) and 2330 images after (B) the 2021 software implementation, along with 269 images in 2022 (C1) and 250 images in 2023 (C2). The results indicated a significant improvement in diagnostic image quality (p < 0.01). The percentage of 'Perfect' examinations rose from 22.34% to 32.27%, while 'Inadequate' images decreased from 13.31% to 5.41% in 2021, and the positive trend continued with 4.46% and 3.20% 'Inadequate' images in 2022 and 2023, respectively (p < 0.01). Using a reliable software platform for real-time AI-driven quality evaluation has the potential to make lasting improvements in image quality, support radiographers' professional growth, and elevate institutional quality standards and documentation simultaneously.

Question: How can AI-powered quality assessment reduce inadequate mammographic quality, which is known to impact sensitivity and increase the risk of interval cancers?
Findings: AI implementation decreased 'Inadequate' mammograms from 13.31% to 3.20% and substantially improved parenchyma visualization, with consistent subgroup trends.
Clinical relevance: By reducing 'Inadequate' mammograms and enhancing imaging quality, AI-driven tools improve diagnostic reliability and support better outcomes in breast cancer screening.
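The before/after comparison can be reconstructed approximately from the reported figures; the counts below are rounded from the percentages, so the sketch is only illustrative of the kind of test behind the reported p < 0.01.

```python
# Chi-square test on the reconstructed 'Inadequate' vs. 'not Inadequate'
# counts before (n=1728) and after (n=2330) the 2021 implementation.
from scipy.stats import chi2_contingency

inadequate_before = round(0.1331 * 1728)   # ~230 of 1728 pre-implementation
inadequate_after = round(0.0541 * 2330)    # ~126 of 2330 post-implementation
table = [[inadequate_before, 1728 - inadequate_before],
         [inadequate_after, 2330 - inadequate_after]]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p:.2e}")   # consistent with the reported p < 0.01
```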

Islam U, Ali YA, Al-Razgan M, Ullah H, Almaiah MA, Tariq Z, Wazir KM

PubMed · Jul 15, 2025
Ultrasound imaging plays an important role in evaluating fetal growth and maternal-fetal health, but its interpretation is challenging due to the complicated anatomy of the fetus and fluctuations in image quality. Although deep learning models, including convolutional neural networks (CNNs), have shown promise, they have largely been limited to a single task, such as segmentation or detection of fetal structures, and thus lack an integrated solution that accounts for the intricate interplay between anatomical structures. To overcome these limitations, Fetal-Net, a new deep learning architecture that integrates multi-scale CNNs and transformer layers, was developed. The model was trained on a large, expertly annotated set of more than 12,000 ultrasound images across different anatomical planes for effective identification of fetal structures and anomaly detection. Fetal-Net achieved excellent performance in anomaly detection, with 96.5% precision, 97.5% accuracy, and 97.8% recall, and proved robust across various imaging settings, making it a potent means of augmenting prenatal care through refined ultrasound image interpretation.
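The abstract names the ingredients (multi-scale CNNs plus transformer layers) but not the architecture itself; the PyTorch sketch below is an assumed minimal combination of the two, not Fetal-Net.

```python
# Assumed minimal multi-scale CNN + transformer classifier (not Fetal-Net).
import torch
import torch.nn as nn

class MultiScaleCNNTransformer(nn.Module):
    def __init__(self, n_classes: int = 2, dim: int = 128):
        super().__init__()
        # Parallel conv branches with different receptive fields (multi-scale).
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, dim // 2, k, padding=k // 2),
                          nn.ReLU(), nn.AdaptiveAvgPool2d(8))
            for k in (3, 7)
        ])
        self.proj = nn.Conv2d(dim, dim, 1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, x):                       # x: (B, 1, H, W) grayscale scan
        feats = torch.cat([b(x) for b in self.branches], dim=1)  # (B, dim, 8, 8)
        tokens = self.proj(feats).flatten(2).transpose(1, 2)     # (B, 64, dim)
        return self.head(self.transformer(tokens).mean(dim=1))   # anomaly logits

logits = MultiScaleCNNTransformer()(torch.randn(2, 1, 224, 224))
print(logits.shape)  # (2, 2)
```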

Mousavinasab SM, Hedyehzadeh M, Mousavinasab ST

PubMed · Jul 15, 2025
This work uses T1, STIR, and T2 MRI sequences of the lumbar vertebrae, together with BMD measurements, to identify osteoporosis using deep learning. An analysis of 1350 MRI images from 50 individuals who had simultaneous BMD and MRI scans was performed, and a custom convolutional neural network was trained to classify osteoporosis. T2-weighted sequences proved the most diagnostic: on them, the proposed model achieved 88.5% accuracy, 88.9% sensitivity, and a 76.1% F1-score, outperforming the T1 and STIR sequences. Its performance was compared with modern deep learning models such as GoogleNet, EfficientNet-B3, ResNet50, InceptionV3, and InceptionResNetV2; these architectures performed well, but our model was more sensitive and accurate. This research shows that T2-weighted MRI is the best sequence for osteoporosis diagnosis and that deep learning on MRI can complement BMD-based approaches while avoiding ionizing radiation. These results support the clinical use of deep learning with MRI for safe, accurate, and rapid osteoporosis diagnosis.
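The reported metrics (accuracy, sensitivity, F1-score) follow directly from per-image predictions; a small scikit-learn sketch with synthetic labels standing in for the MRI data:

```python
# Computing the study's evaluation metrics from predictions (synthetic data).
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=270)           # osteoporosis vs. normal
y_pred = np.where(rng.random(270) < 0.88, y_true, 1 - y_true)  # ~88%-accurate stand-in

print(f"accuracy    = {accuracy_score(y_true, y_pred):.3f}")
print(f"sensitivity = {recall_score(y_true, y_pred):.3f}")  # recall of positives
print(f"F1-score    = {f1_score(y_true, y_pred):.3f}")
```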

Krag CH, Müller FC, Gandrup KL, Andersen MB, Møller JM, Liu ML, Rud A, Krabbe S, Al-Farra L, Nielsen M, Kruuse C, Boesen MP

PubMed · Jul 15, 2025
To assess the prevalence of motion artifacts and the factors associated with them in a cohort of suspected stroke patients, and to determine their impact on diagnostic accuracy for both AI and radiologists. This retrospective cross-sectional study included brain MRI scans of consecutive adult suspected stroke patients from a non-comprehensive Danish stroke center between January and April 2020. An expert neuroradiologist identified acute ischemic, hemorrhagic, and space-occupying lesions as references. Two blinded radiology residents rated MRI image quality and motion artifacts. The diagnostic accuracy of a CE-marked deep learning tool was compared to that of radiology reports. Multivariate analysis examined associations between patient characteristics and motion artifacts. 775 patients (68 years ± 16, 420 female) were included. Acute ischemic, hemorrhagic, and space-occupying lesions were found in 216 (27.9%), 12 (1.5%), and 20 (2.6%) patients, respectively. Motion artifacts were present in 57 (7.4%). Increasing age (OR per decade, 1.60; 95% CI: 1.26, 2.09; p < 0.001) and limb motor symptoms (OR, 2.36; 95% CI: 1.32, 4.20; p = 0.003) were independently associated with motion artifacts in multivariate analysis. Motion artifacts significantly reduced the accuracy of detecting hemorrhage; this reduction was greater for the AI tool (from 88% to 67%; p < 0.001) than for radiology reports (from 100% to 93%; p < 0.001). Ischemic and space-occupying lesion detection was not significantly affected. Motion artifacts are common in suspected stroke patients, particularly in the elderly and patients with motor symptoms, reducing the accuracy of hemorrhage detection by both AI and radiologists.

Question: Motion artifacts reduce the quality of MRI scans, but it is unclear which factors are associated with them and how they impact diagnostic accuracy.
Findings: Motion artifacts occurred in 7% of suspected stroke MRI scans, were associated with higher patient age and motor symptoms, and lowered hemorrhage detection by AI and radiologists.
Clinical relevance: Motion artifacts in stroke brain MRIs significantly reduce the diagnostic accuracy of human and AI detection of intracranial hemorrhages. Elderly patients and those with motor symptoms may benefit from a greater focus on motion artifact prevention and reduction.
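The reported odds ratios (OR per decade of age, OR for limb motor symptoms) are the kind of output a multivariate logistic regression produces; a statsmodels sketch on synthetic data, with an assumed covariate set, is shown below. Note that modeling age in decades simply means dividing age in years by ten before fitting.

```python
# Multivariate logistic regression yielding odds ratios (synthetic data;
# covariates and effect sizes chosen to roughly echo the reported ORs).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 775
age_decades = rng.normal(6.8, 1.6, n)            # age/10, matching 68 +/- 16 years
motor = rng.integers(0, 2, n).astype(float)      # limb motor symptoms (0/1)
logit = -6.0 + 0.47 * age_decades + 0.86 * motor  # true ORs ~1.60 and ~2.36
artifact = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([age_decades, motor]))
res = sm.Logit(artifact, X).fit(disp=0)
print(np.exp(res.params[1:]))      # fitted odds ratios (age per decade, motor)
print(np.exp(res.conf_int()[1:]))  # their 95% confidence intervals
```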