
The value of intratumoral and peritumoral ultrasound radiomics model constructed using multiple machine learning algorithms for non-mass breast cancer.

Liu J, Chen J, Qiu L, Li R, Li Y, Li T, Leng X

PubMed · Jun 6, 2025
To investigate the diagnostic capability of multiple machine learning algorithms combined with intratumoral and peritumoral ultrasound radiomics models for non-mass breast cancer in dense breast backgrounds. Manual segmentation of ultrasound images was performed to define the intratumoral region of interest (ROI), and five peritumoral ROIs were generated by extending the contours by 1 to 5 mm. A total of 851 radiomics features were extracted from these regions and filtered using statistical methods. Thirteen machine learning algorithms were employed to create radiomics models for the intratumoral and peritumoral areas. The best model was combined with clinical ultrasound predictive factors to form a joint model, which was evaluated using ROC curves, calibration curves, and decision curve analysis (DCA). Based on this model, a nomogram was developed, demonstrating high predictive performance, with C-index values of 0.982 and 0.978. The model incorporating the intratumoral and peritumoral 2 mm regions outperformed the other models, indicating its effectiveness in distinguishing between benign and malignant breast lesions. This study concludes that ultrasound radiomics, particularly in the intratumoral and peritumoral 2 mm regions, has significant potential for diagnosing non-mass breast cancer, and the nomogram can assist clinical decision-making.
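For a binary benign-versus-malignant task, the C-index reported for the nomogram is equivalent to the area under the ROC curve, which can be computed directly from predicted scores via the Mann-Whitney statistic. A minimal sketch (the scores below are illustrative, not study data):

```python
def c_index(scores_pos, scores_neg):
    """AUC / C-index via the Mann-Whitney U statistic: the fraction of
    (malignant, benign) pairs ranked correctly, counting ties as half."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Illustrative model outputs (hypothetical, not from the study)
malignant = [0.91, 0.85, 0.78, 0.60]
benign = [0.30, 0.42, 0.55, 0.20]
print(c_index(malignant, benign))  # 1.0 here: every pair correctly ordered
```

A C-index of 0.982 would mean 98.2% of such malignant-benign pairs are ranked correctly by the model.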

Inconsistency of AI in intracranial aneurysm detection with varying dose and image reconstruction.

Goelz L, Laudani A, Genske U, Scheel M, Bohner G, Bauknecht HC, Mutze S, Hamm B, Jahnke P

PubMed · Jun 6, 2025
Scanner-related changes in data quality are common in medical imaging, yet monitoring their impact on diagnostic AI performance remains challenging. In this study, we performed standardized consistency testing of an FDA-cleared and CE-marked AI for triage and notification of intracranial aneurysms across changes in image data quality caused by dose and image reconstruction. Our assessment was based on repeated examinations of a head CT phantom designed for AI evaluation, replicating a patient with three intracranial aneurysms in the anterior, middle and posterior circulation. We show that the AI maintains stable performance within the medium dose range but produces inconsistent results at reduced dose and, unexpectedly, at higher dose when filtered back projection is used. Data quality standards required for AI are stricter than those for neuroradiologists, who report higher aneurysm visibility rates and experience performance degradation only at substantially lower doses, with no decline at higher doses.

UANV: UNet-based attention network for thoracolumbar vertebral compression fracture angle measurement.

Lee Y, Kim J, Lee KC, An S, Cho Y, Ahn KS, Hur JW

PubMed · Jun 6, 2025
Kyphosis is a prevalent spinal condition in which the spine curves in the sagittal plane, resulting in spinal deformities. Curvature estimation provides a powerful index for assessing the severity of the deformity. In current clinical diagnosis, the standard method for quantitatively assessing curvature is to measure the vertebral angle, the angle between two lines drawn perpendicular to the upper and lower endplates of the involved vertebra. However, manual Cobb angle measurement requires considerable time and effort, along with associated problems such as interobserver and intraobserver variation. Hence, in this study, we propose the UNet-based Attention Network for Thoracolumbar Vertebral Compression Fracture Angle (UANV), a vertebral angle measurement model for lateral spinal X-rays based on a deep convolutional neural network (CNN). Specifically, we considered the detailed shape of each vertebral body with an attention mechanism and then localized the edges of each vertebra to calculate the vertebral angles.
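The vertebral angle described above is defined between perpendiculars to the two endplates, which equals the angle between the endplate lines themselves. A minimal sketch of that geometry from landmark coordinates (the points are hypothetical, and real pipelines would take them from the network's edge predictions):

```python
import math

def endplate_angle(p1, p2, q1, q2):
    """Angle in degrees between the line through the upper-endplate
    landmarks (p1, p2) and the line through the lower-endplate landmarks
    (q1, q2). The angle between the perpendiculars to the endplates
    equals the angle between the endplate lines themselves."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a1 - a2)) % 180.0
    return min(deg, 180.0 - deg)  # report the acute angle

# Hypothetical landmark coordinates (pixels) on a lateral X-ray
upper = ((10.0, 20.0), (60.0, 25.0))   # upper endplate, slight tilt
lower = ((12.0, 55.0), (62.0, 45.0))   # lower endplate, opposite tilt
print(endplate_angle(*upper, *lower))
```

In a compression fracture the wedge-shaped vertebral body makes this angle large, which is why accurate edge localization drives measurement accuracy.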

Predicting infarct outcomes after extended time window thrombectomy in large vessel occlusion using knowledge guided deep learning.

Dai L, Yuan L, Zhang H, Sun Z, Jiang J, Li Z, Li Y, Zha Y

PubMed · Jun 6, 2025
Predicting the final infarct after an extended time window mechanical thrombectomy (MT) is beneficial for treatment planning in acute ischemic stroke (AIS). By introducing guidance from prior knowledge, this study aims to improve the accuracy of the deep learning model for post-MT infarct prediction using pre-MT brain perfusion data. This retrospective study collected CT perfusion data at admission for AIS patients receiving MT over 6 hours after symptom onset, from January 2020 to December 2024, across three centers. Infarct on post-MT diffusion weighted imaging served as ground truth. Five Swin transformer based models were developed for post-MT infarct segmentation using pre-MT CT perfusion parameter maps: BaselineNet served as the basic model for comparative analysis, CollateralFlowNet included a collateral circulation evaluation score, InfarctProbabilityNet incorporated infarct probability mapping, ArterialTerritoryNet was guided by artery territory mapping, and UnifiedNet combined all prior knowledge sources. Model performance was evaluated using the Dice coefficient and intersection over union (IoU). A total of 221 patients with AIS were included (65.2% women) with a median age of 73 years. Baseline ischemic core based on CT perfusion threshold achieved a Dice coefficient of 0.50 and IoU of 0.33. BaselineNet improved to a Dice coefficient of 0.69 and IoU of 0.53. Compared with BaselineNet, models incorporating medical knowledge demonstrated higher performance: CollateralFlowNet (Dice coefficient 0.72, IoU 0.56), InfarctProbabilityNet (Dice coefficient 0.74, IoU 0.58), ArterialTerritoryNet (Dice coefficient 0.75, IoU 0.60), and UnifiedNet (Dice coefficient 0.82, IoU 0.71) (all P<0.05). In this study, integrating medical knowledge into deep learning models enhanced the accuracy of infarct predictions in AIS patients undergoing extended time window MT.
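The Dice coefficient and IoU used to score the predicted infarct masks can be sketched over voxel index sets (the masks below are toy data, not study segmentations):

```python
def dice(pred, truth):
    """Dice coefficient: 2|A∩B| / (|A| + |B|)."""
    inter = len(pred & truth)
    return 2.0 * inter / (len(pred) + len(truth)) if (pred or truth) else 1.0

def iou(pred, truth):
    """Intersection over union: |A∩B| / |A∪B|."""
    union = len(pred | truth)
    return len(pred & truth) / union if union else 1.0

# Toy voxel index sets standing in for binary infarct masks
pred  = {(1, 1), (1, 2), (2, 1), (2, 2)}
truth = {(1, 2), (2, 1), (2, 2), (3, 2)}
print(dice(pred, truth), iou(pred, truth))  # 0.75 0.6
```

For a single mask pair the two metrics are linked by the identity IoU = Dice / (2 − Dice), which roughly matches the paired values reported above (e.g. Dice 0.69 corresponds to IoU ≈ 0.53); aggregated cohort means need not satisfy it exactly.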

Quasi-supervised MR-CT image conversion based on unpaired data.

Zhu R, Ruan Y, Li M, Qian W, Yao Y, Teng Y

PubMed · Jun 6, 2025
In radiotherapy planning, acquiring both magnetic resonance (MR) and computed tomography (CT) images is crucial for comprehensive evaluation and treatment. However, simultaneous acquisition of MR and CT images is time-consuming, economically expensive, and involves ionizing radiation, which poses health risks to patients. The objective of this study is to generate CT images from radiation-free MR images using a novel quasi-supervised learning framework. In this work, we propose a quasi-supervised framework to explore the underlying relationship between unpaired MR and CT images. Normalized mutual information (NMI) is employed as a similarity metric to evaluate the correspondence between MR and CT scans. To establish optimal pairings, we compute an NMI matrix across the training set and apply the Hungarian algorithm for global matching. The resulting MR-CT pairs, along with their NMI scores, are treated as prior knowledge and integrated into the training process to guide the MR-to-CT image translation model. Experimental results indicate that the proposed method significantly outperforms existing unsupervised image synthesis methods in terms of both image quality and consistency of image features during the MR to CT image conversion process. The generated CT images show a higher degree of accuracy and fidelity to the original MR images, ensuring better preservation of anatomical details and structural integrity. This study proposes a quasi-supervised framework that converts unpaired MR and CT images into structurally consistent pseudo-pairs, providing informative priors to enhance cross-modality image synthesis. This strategy not only improves the accuracy and reliability of MR-CT conversion, but also reduces reliance on costly and scarce paired datasets. The proposed framework offers a practical and scalable solution for real-world medical imaging applications, where paired annotations are often unavailable.
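The pairing step can be sketched end to end: compute NMI between discretized images, then solve a global one-to-one assignment over the NMI matrix. The brute-force matcher below stands in for the Hungarian algorithm the authors use (which solves the same problem in polynomial time for realistic training sets); the "images" are toy intensity-binned lists, not real scans:

```python
import math
from collections import Counter
from itertools import permutations

def entropy(labels):
    """Shannon entropy of a discrete label sequence (nats)."""
    n = len(labels)
    return -sum((c / n) * math.log(c / n) for c in Counter(labels).values())

def nmi(x, y):
    """Normalized mutual information of two equally sized, discretized
    images: (H(X) + H(Y)) / H(X, Y). Ranges from 1 (independent) to 2
    (one image is a deterministic relabeling of the other)."""
    return (entropy(x) + entropy(y)) / entropy(list(zip(x, y)))

def best_pairing(mr_set, ct_set):
    """Globally optimal one-to-one MR-CT pairing maximizing total NMI.
    Brute force over permutations for this toy example; the Hungarian
    algorithm gives the same optimum in O(n^3)."""
    sim = [[nmi(m, c) for c in ct_set] for m in mr_set]
    return max(permutations(range(len(ct_set))),
               key=lambda perm: sum(sim[i][j] for i, j in enumerate(perm)))

# Toy "images": flattened, intensity-binned scans (hypothetical data)
mr = [[0, 0, 1, 1, 2, 2], [0, 1, 2, 0, 1, 2], [2, 2, 1, 1, 0, 0]]
ct = [[2, 2, 1, 1, 0, 0], [5, 5, 6, 6, 7, 7], [3, 4, 5, 3, 4, 5]]
print(best_pairing(mr, ct))
```

Because NMI compares intensity co-occurrence rather than raw values, it can match an MR scan to a CT scan of the same anatomy even though the two modalities have entirely different intensity scales.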

The Predictive Value of Multiparameter Characteristics of Coronary Computed Tomography Angiography for Coronary Stent Implantation.

Xu X, Wang Y, Yang T, Wang Z, Chu C, Sun L, Zhao Z, Li T, Yu H, Wang X, Song P

PubMed · Jun 6, 2025
This study aims to evaluate the predictive value of multiparameter characteristics of coronary computed tomography angiography (CCTA) plaque and the ratio of coronary artery volume to myocardial mass (V/M) in guiding percutaneous coronary stent implantation (PCI) in patients diagnosed with unstable angina. Patients who underwent CCTA and coronary angiography (CAG) within 2 months were retrospectively analyzed. According to CAG results, patients were divided into a medical therapy group (n=41) and a PCI revascularization group (n=37). The plaque characteristics and V/M were quantitatively evaluated. The parameters included minimum lumen area at stenosis (MLA), maximum area stenosis (MAS), maximum diameter stenosis (MDS), total plaque burden (TPB), plaque length, plaque volume, and each component volume within the plaque. Fractional flow reserve (FFR) and pericoronary fat attenuation index (FAI) were calculated based on CCTA. Artificial intelligence software was employed to compare the differences in each parameter between the 2 groups at both the vessel and plaque levels. The PCI group had higher MAS, MDS, TPB, FAI, noncalcified plaque volume and lipid plaque volume, and significantly lower V/M, MLA, and CT-derived fractional flow reserve (FFRCT). V/M, TPB, MLA, FFRCT, and FAI are important influencing factors of PCI. The combined model of MLA, FFRCT, and FAI had the largest area under the ROC curve (AUC=0.920), and had the best performance in predicting PCI. The integration of AI-derived multiparameter features from one-stop CCTA significantly enhances the accuracy of predicting PCI in angina pectoris patients, evaluating at the plaque, vessel, and patient levels.

Advances in disease detection through retinal imaging: A systematic review.

Bilal H, Keles A, Bendechache M

PubMed · Jun 6, 2025
Ocular and non-ocular diseases significantly impact millions of people worldwide, leading to vision impairment or blindness if not detected and managed early. Many individuals could be prevented from becoming blind by treating these diseases early on and stopping their progression. Despite advances in medical imaging and diagnostic tools, the manual detection of these diseases remains labor-intensive, time-consuming, and dependent on the expert's experience. Computer-aided diagnosis (CAD) has been transformed by machine learning (ML), providing promising methods for the automated detection and grading of diseases using various retinal imaging modalities. In this paper, we present a comprehensive systematic literature review that discusses the use of ML techniques to detect diseases from retinal images, utilizing both single and multi-modal imaging approaches. We analyze the efficiency of various Deep Learning and classical ML models, highlighting their achievements in accuracy, sensitivity, and specificity. Even with these advancements, the review identifies several critical challenges. We propose future research directions to address these issues. By overcoming these challenges, the potential of ML to enhance diagnostic accuracy and patient outcomes can be fully realized, opening the way for more reliable and effective ocular and non-ocular disease management.

Comparative analysis of convolutional neural networks and vision transformers in identifying benign and malignant breast lesions.

Wang L, Fang S, Chen X, Pan C, Meng M

PubMed · Jun 6, 2025
Various deep learning models have been developed and employed for medical image classification. This study conducted comprehensive experiments on 12 models, aiming to establish reliable benchmarks for research on breast dynamic contrast-enhanced magnetic resonance imaging image classification. Twelve deep learning models were systematically compared by analyzing variations in 4 key hyperparameters: optimizer (Op), learning rate, batch size (BS), and data augmentation. The evaluation criteria encompassed a comprehensive set of metrics including accuracy (Ac), loss value, precision, recall rate, F1-score, and area under the receiver operating characteristic curve. Furthermore, the training times and model parameter counts were assessed for holistic performance comparison. Adjustments in the BS within Adam Op had a minimal impact on Ac in the convolutional neural network models. However, altering the Op and learning rate while maintaining the same BS significantly affected the Ac. The ResNet152 network model exhibited the lowest Ac. Both the recall rate and area under the receiver operating characteristic curve for the ResNet152 and Vision transformer-base (ViT) models were inferior compared to the others. Data augmentation unexpectedly reduced the Ac of ResNet50, ResNet152, VGG16, VGG19, and ViT models. The VGG16 model boasted the shortest training duration, whereas the ViT model, before data augmentation, had the longest training time and smallest model weight. The ResNet152 and ViT models were not well suited for image classification tasks involving small breast dynamic contrast-enhanced magnetic resonance imaging datasets. Although data augmentation is typically beneficial, its application should be approached cautiously. These findings provide important insights to inform and refine future research in this domain.

Data Driven Models Merging Geometric, Biomechanical, and Clinical Data to Assess the Rupture of Abdominal Aortic Aneurysms.

Alloisio M, Siika A, Roy J, Zerwes S, Hyhlik-Duerr A, Gasser TC

PubMed · Jun 6, 2025
Despite elective repair of a large portion of stable abdominal aortic aneurysms (AAAs), the diameter criterion cannot prevent all small AAA ruptures. Since rupture depends on many factors, this study explored whether machine learning (ML) models (logistic regression [LogR], linear and non-linear support vector machine [SVM-Lin and SVM-Nlin], and Gaussian Naïve Bayes [GNB]) might improve the diameter based risk assessment by comparing already ruptured (diameter 52.8 - 174.5 mm) with asymptomatic (diameter 40.4 - 95.5 mm) aortas. A retrospective case-control observational study included ruptured AAAs from two centres (2010 - 2012) with computed tomography angiography images for finite element analysis. Clinical patient data and geometric and biomechanical AAA properties were fed into ML models, whose output was compared with the results from intact cases. Classifications were explored for all cases and those having diameters below 70 mm. All data trained and validated the ML models, with a five fold cross-validation. SHapley Additive exPlanations (SHAP) analysis ranked the factors for rupture identification. One hundred and seven ruptured (20% female, mean age 77 years, mean diameter 86.3 mm) and 200 non-ruptured aneurysmal infrarenal aortas (22% female, mean age 74 years, mean diameter 57 mm) were investigated through cross-validation methods. Given the entire dataset, the diameter threshold of 55 mm in men and 50 mm in women provided a 58% accurate rupture classification. It was 99% sensitive (AAA rupture identified correctly) and 36% specific (intact AAAs identified correctly). ML models improved accuracy (LogR 90.2%, SVM-Lin 89.48%, SVM-Nlin 88.7%, and GNB 86.4%); accuracy decreased when trained on the ≤ 70 mm group (55/50 mm diameter threshold 44.2%, LogR 82.5%, SVM-Lin 83.6%, SVM-Nlin 65.9%, and GNB: 84.7%). SHAP ranked biomechanical parameters other than the diameter as the most relevant. A multiparameter estimate enhanced the purely diameter based approach. 
The proposed predictability method should be further tested in longitudinal studies.
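The headline figures for the diameter criterion (58% accuracy, 99% sensitivity, 36% specificity) follow directly from the confusion-matrix definitions. A small sketch with counts reconstructed from the reported cohort sizes and percentages (illustrative, not taken from the paper's tables):

```python
def diagnostics(tp, fn, tn, fp):
    """Sensitivity, specificity, and accuracy from confusion-matrix counts."""
    sens = tp / (tp + fn)                  # ruptured cases flagged as high risk
    spec = tn / (tn + fp)                  # intact cases flagged as low risk
    acc = (tp + tn) / (tp + fn + tn + fp)  # overall correct classifications
    return sens, spec, acc

# Counts consistent with the reported 107 ruptured / 200 intact aortas
# (hypothetical reconstruction for illustration)
tp, fn = 106, 1    # ruptured AAAs above / below the 55/50 mm threshold
tn, fp = 72, 128   # intact AAAs below / above the threshold
sens, spec, acc = diagnostics(tp, fn, tn, fp)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```

This makes the study's point concrete: the diameter threshold misses almost no ruptures but flags nearly two thirds of intact aneurysms, which is exactly the specificity gap the ML models close.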

Development of a Deep Learning Model for the Volumetric Assessment of Osteonecrosis of the Femoral Head on Three-Dimensional Magnetic Resonance Imaging.

Uemura K, Takashima K, Otake Y, Li G, Mae H, Okada S, Hamada H, Sugano N

PubMed · Jun 6, 2025
Although volumetric assessment of necrotic lesions using the Steinberg classification predicts future collapse in osteonecrosis of the femoral head (ONFH), quantifying these lesions using magnetic resonance imaging (MRI) generally requires time and effort, preventing the Steinberg classification from being routinely used in clinical investigations. Thus, this study aimed to use deep learning to develop a method for automatically segmenting necrotic lesions using MRI and for automatically classifying them according to the Steinberg classification. A total of 63 hips from patients who had ONFH and did not have collapse were included. An orthopaedic surgeon manually segmented the femoral head and necrotic lesions on MRI acquired using a spoiled gradient-echo sequence. Based on manual segmentation, 22 hips were classified as Steinberg grade A, 23 as Steinberg grade B, and 18 as Steinberg grade C. The manually segmented labels were used to train a deep learning model based on a 5-layer Dynamic U-Net. A four-fold cross-validation was performed to assess segmentation accuracy using the Dice coefficient (DC) and average symmetric distance (ASD). Furthermore, hip classification accuracy according to the Steinberg classification was evaluated along with the weighted Kappa coefficient. The median DC and ASD for the femoral head region were 0.95 (interquartile range [IQR], 0.95 to 0.96) and 0.65 mm (IQR, 0.59 to 0.75), respectively. For necrotic lesions, the median DC and ASD were 0.89 (IQR, 0.85 to 0.92) and 0.76 mm (IQR, 0.58 to 0.96), respectively. Based on the Steinberg classification, the grading matched in 59 hips (accuracy: 93.7%), with a weighted Kappa coefficient of 0.98. The proposed deep learning model exhibited high accuracy in segmenting and grading necrotic lesions according to the Steinberg classification using MRI. This model can be used to assist clinicians in the volumetric assessment of ONFH.
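The weighted Kappa used to score Steinberg-grade agreement can be sketched in a few lines. Quadratic weights are assumed here since the abstract does not state the weighting scheme, and the gradings are toy data, not the study's 63 hips:

```python
def weighted_kappa(a, b, n_grades):
    """Weighted Cohen's kappa with quadratic disagreement weights
    w[i][j] = ((i - j) / (n_grades - 1))**2. a and b are integer-coded
    gradings (e.g. Steinberg A/B/C -> 0/1/2) of the same hips."""
    n = len(a)
    obs = [[0.0] * n_grades for _ in range(n_grades)]
    for i, j in zip(a, b):
        obs[i][j] += 1.0 / n                      # observed joint distribution
    pa = [sum(1 for g in a if g == k) / n for k in range(n_grades)]
    pb = [sum(1 for g in b if g == k) / n for k in range(n_grades)]
    num = den = 0.0
    for i in range(n_grades):
        for j in range(n_grades):
            w = ((i - j) / (n_grades - 1)) ** 2   # penalty grows with distance
            num += w * obs[i][j]                  # observed weighted disagreement
            den += w * pa[i] * pb[j]              # chance-expected disagreement
    return 1.0 - num / den

# Toy gradings (0=A, 1=B, 2=C); hypothetical, not the study's data
manual = [0, 0, 1, 1, 1, 2, 2, 2]
model  = [0, 0, 1, 1, 2, 2, 2, 2]
print(weighted_kappa(manual, model, 3))
```

Unlike raw accuracy, this statistic discounts agreement expected by chance and penalizes an A-vs-C confusion more than an adjacent-grade one, which is why it is the usual choice for ordinal grading tasks.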