
Comparative analysis of convolutional neural networks and vision transformers in identifying benign and malignant breast lesions.

Wang L, Fang S, Chen X, Pan C, Meng M

PubMed · Jun 6 2025
Various deep learning models have been developed and employed for medical image classification. This study conducted comprehensive experiments on 12 models, aiming to establish reliable benchmarks for research on breast dynamic contrast-enhanced magnetic resonance imaging image classification. Twelve deep learning models were systematically compared by analyzing variations in 4 key hyperparameters: optimizer (Op), learning rate, batch size (BS), and data augmentation. The evaluation criteria encompassed a comprehensive set of metrics including accuracy (Ac), loss value, precision, recall rate, F1-score, and area under the receiver operating characteristic curve. Furthermore, the training times and model parameter counts were assessed for holistic performance comparison. Adjustments in the BS within Adam Op had a minimal impact on Ac in the convolutional neural network models. However, altering the Op and learning rate while maintaining the same BS significantly affected the Ac. The ResNet152 network model exhibited the lowest Ac. Both the recall rate and area under the receiver operating characteristic curve for the ResNet152 and Vision transformer-base (ViT) models were inferior compared to the others. Data augmentation unexpectedly reduced the Ac of ResNet50, ResNet152, VGG16, VGG19, and ViT models. The VGG16 model boasted the shortest training duration, whereas the ViT model, before data augmentation, had the longest training time and smallest model weight. The ResNet152 and ViT models were not well suited for image classification tasks involving small breast dynamic contrast-enhanced magnetic resonance imaging datasets. Although data augmentation is typically beneficial, its application should be approached cautiously. These findings provide important insights to inform and refine future research in this domain.
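As a point of reference for the metrics this benchmark reports, here is a minimal sketch (illustrative counts only, not the study's data) of how accuracy, precision, recall rate, and F1-score follow from a binary confusion matrix:

```python
# Hedged sketch: the standard binary-classification metrics named in the
# abstract, computed from confusion-matrix counts. All counts are illustrative.

def classification_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Return accuracy, precision, recall (sensitivity), and F1-score."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)  # the "recall rate" in the abstract
    f1 = 2 * precision * recall / (precision + recall)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

m = classification_metrics(tp=80, fp=10, tn=90, fn=20)
print(m)
```

The AUC reported alongside these is threshold-free and is computed from ranked scores rather than a single confusion matrix.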

Data Driven Models Merging Geometric, Biomechanical, and Clinical Data to Assess the Rupture of Abdominal Aortic Aneurysms.

Alloisio M, Siika A, Roy J, Zerwes S, Hyhlik-Duerr A, Gasser TC

PubMed · Jun 6 2025
Despite elective repair of a large portion of stable abdominal aortic aneurysms (AAAs), the diameter criterion cannot prevent all small AAA ruptures. Since rupture depends on many factors, this study explored whether machine learning (ML) models (logistic regression [LogR], linear and non-linear support vector machine [SVM-Lin and SVM-Nlin], and Gaussian Naïve Bayes [GNB]) might improve the diameter-based risk assessment by comparing already ruptured (diameter 52.8 - 174.5 mm) with asymptomatic (diameter 40.4 - 95.5 mm) aortas. A retrospective case-control observational study included ruptured AAAs from two centres (2010 - 2012) with computed tomography angiography images for finite element analysis. Clinical patient data and geometric and biomechanical AAA properties were fed into ML models, whose output was compared with the results from intact cases. Classifications were explored for all cases and for those with diameters below 70 mm. All data trained and validated the ML models, with five-fold cross-validation. SHapley Additive exPlanations (SHAP) analysis ranked the factors for rupture identification. One hundred and seven ruptured (20% female, mean age 77 years, mean diameter 86.3 mm) and 200 non-ruptured aneurysmal infrarenal aortas (22% female, mean age 74 years, mean diameter 57 mm) were investigated through cross-validation methods. Given the entire dataset, the diameter threshold of 55 mm in men and 50 mm in women provided a 58% accurate rupture classification. It was 99% sensitive (AAA rupture identified correctly) and 36% specific (intact AAAs identified correctly). ML models improved accuracy (LogR 90.2%, SVM-Lin 89.48%, SVM-Nlin 88.7%, and GNB 86.4%); accuracy decreased when trained on the ≤ 70 mm group (55/50 mm diameter threshold 44.2%, LogR 82.5%, SVM-Lin 83.6%, SVM-Nlin 65.9%, and GNB 84.7%). SHAP ranked biomechanical parameters other than the diameter as the most relevant. A multiparameter estimate enhanced the purely diameter-based approach. The proposed predictability method should be further tested in longitudinal studies.
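The five-fold cross-validation the authors describe can be sketched as follows; the cohort size (307 = 107 ruptured + 200 intact) is taken from the abstract, everything else is an illustrative assumption:

```python
# Hedged sketch of k-fold cross-validation index splitting: shuffle sample
# indices once, partition into k near-equal folds, and let each fold serve
# once as the validation set. Not the authors' code.
import random

def kfold_indices(n_samples: int, k: int = 5, seed: int = 0):
    """Return a list of (train_indices, val_indices) pairs."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    return [(sorted(set(idx) - set(f)), sorted(f)) for f in folds]

splits = kfold_indices(307, k=5)  # 107 ruptured + 200 intact aortas
```

Stratifying the split by outcome (rupture vs. intact) would be the usual refinement for an imbalanced case-control cohort like this one.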

Development of a Deep Learning Model for the Volumetric Assessment of Osteonecrosis of the Femoral Head on Three-Dimensional Magnetic Resonance Imaging.

Uemura K, Takashima K, Otake Y, Li G, Mae H, Okada S, Hamada H, Sugano N

PubMed · Jun 6 2025
Although volumetric assessment of necrotic lesions using the Steinberg classification predicts future collapse in osteonecrosis of the femoral head (ONFH), quantifying these lesions on magnetic resonance imaging (MRI) generally requires considerable time and effort, preventing the Steinberg classification from being routinely used in clinical investigations. Thus, this study aimed to use deep learning to develop a method for automatically segmenting necrotic lesions on MRI and automatically classifying them according to the Steinberg classification. A total of 63 hips from patients who had ONFH and did not have collapse were included. An orthopaedic surgeon manually segmented the femoral head and necrotic lesions on MRI acquired using a spoiled gradient-echo sequence. Based on manual segmentation, 22 hips were classified as Steinberg grade A, 23 as Steinberg grade B, and 18 as Steinberg grade C. The manually segmented labels were used to train a deep learning model that used a 5-layer Dynamic U-Net system. A four-fold cross-validation was performed to assess segmentation accuracy using the Dice coefficient (DC) and average symmetric distance (ASD). Furthermore, hip classification accuracy according to the Steinberg classification was evaluated along with the weighted Kappa coefficient. The median DC and ASD for the femoral head region were 0.95 (interquartile range [IQR], 0.95 to 0.96) and 0.65 mm (IQR, 0.59 to 0.75), respectively. For necrotic lesions, the median DC and ASD were 0.89 (IQR, 0.85 to 0.92) and 0.76 mm (IQR, 0.58 to 0.96), respectively. Based on the Steinberg classification, the grading matched in 59 hips (accuracy: 93.7%), with a weighted Kappa coefficient of 0.98. The proposed deep learning model exhibited high accuracy in segmenting and grading necrotic lesions according to the Steinberg classification using MRI. This model can be used to assist clinicians in the volumetric assessment of ONFH.
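A minimal sketch of the Dice coefficient (DC) used above to score the segmentations, treating each contour as a set of voxel indices (not the authors' implementation):

```python
# Hedged sketch: Dice similarity between two segmentations represented as
# sets of voxel coordinates. 1.0 means perfect overlap, 0.0 no overlap.

def dice_coefficient(a: set, b: set) -> float:
    """DC = 2|A ∩ B| / (|A| + |B|)."""
    if not a and not b:
        return 1.0  # two empty masks agree by convention
    return 2 * len(a & b) / (len(a) + len(b))

manual = {(0, 0), (0, 1), (1, 0)}
predicted = {(0, 1), (1, 0), (1, 1)}
print(dice_coefficient(manual, predicted))
```

The paper's other metric, average symmetric distance (ASD), is surface-based and would additionally require distance transforms between contour boundaries.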

[Albumin-myosteatosis gauge assisted by an artificial intelligence tool as a prognostic factor in patients with metastatic colorectal cancer].

de Luis Román D, Primo D, Izaola Jáuregui O, Sánchez Lite I, López Gómez JJ

PubMed · Jun 6 2025
To evaluate the prognostic role of the albumin-myosteatosis marker (MAM) in Caucasian patients with metastatic colorectal cancer. This study involved 55 consecutive Caucasian patients diagnosed with metastatic colorectal cancer. CT scans at the L3 vertebral level were analyzed to determine skeletal muscle cross-sectional area, skeletal muscle index (SMI), and skeletal muscle density (SMD). Bioelectrical impedance analysis (BIA) (phase angle, reactance, resistance, and SMI-BIA) was used. Albumin and prealbumin were measured. The albumin-myosteatosis marker was calculated as MAM = serum albumin (g/dL) × skeletal muscle density (SMD) in Hounsfield units (HU). Survival was estimated using the Kaplan-Meier method, and comparisons between groups were performed using the log-rank test. The mean age was 68.1 ± 9.1 years. Patients were divided into two groups based on the median MAM (129.1 AU for women and 156.3 AU for men). Patients in the low MAM group had significantly reduced values of phase angle and reactance, as well as older age. These patients also had higher rates of malnutrition by GLIM criteria (odds ratio: 3.8; 95% CI = 1.2-12.9), low muscle mass diagnosed with CT (odds ratio: 3.6; 95% CI = 1.2-10.9), and mortality (odds ratio: 9.82; 95% CI = 1.2-10.9). The Kaplan-Meier analysis demonstrated significant differences in 5-year survival between MAM groups (patients in the low median MAM group vs. patients in the high median MAM group) (HR: 6.2; 95% CI = 1.10-37.5). The albumin-myosteatosis marker (MAM) may function as a prognostic marker of survival in Caucasian patients with metastatic CRC.
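The MAM definition and sex-specific median split above can be sketched directly; the medians (129.1 AU for women, 156.3 AU for men) come from the abstract, while the function names are illustrative:

```python
# Hedged sketch of the albumin-myosteatosis marker and the median-based
# grouping described in the abstract. Not the authors' code.

def mam(albumin_g_dl: float, smd_hu: float) -> float:
    """MAM = serum albumin (g/dL) x skeletal muscle density (HU)."""
    return albumin_g_dl * smd_hu

def mam_group(value: float, sex: str,
              median_female: float = 129.1,
              median_male: float = 156.3) -> str:
    """Assign a patient to the 'low' or 'high' MAM group by the sex-specific median."""
    median = median_female if sex == "F" else median_male
    return "low" if value < median else "high"

print(mam_group(mam(3.2, 35.0), "M"))
```

In the study, group membership then feeds a Kaplan-Meier survival comparison with the log-rank test.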

Research on ischemic stroke risk assessment based on CTA radiomics and machine learning.

Li ZL, Yang HY, Lv XX, Zhang YK, Zhu XY, Zhang YR, Guo L

PubMed · Jun 5 2025
The study explores the value of a model constructed by integrating CTA-based carotid plaque radiomic features, clinical risk factors, and plaque imaging characteristics for predicting the risk of ischemic stroke. Data from 123 patients with carotid atherosclerosis were analyzed and divided into stroke and asymptomatic groups based on DWI findings. Clinical information was collected, and plaque imaging characteristics were assessed to construct a traditional model. Radiomic features of carotid plaques were extracted using 3D-Slicer software to build a radiomics model. Logistic regression was applied in the training set to establish the traditional model, the radiomics model, and a combined model, which were then tested in the validation set. The ability of the three models to predict ischemic stroke was evaluated using ROC curves, while calibration curves, decision curve analysis, and clinical impact curves were used to assess the models' clinical utility. Differences in AUC values between models were compared using the DeLong test. Hypertension, diabetes, elevated homocysteine (Hcy) concentration, and plaque burden were independent risk factors for ischemic stroke and were used to establish the traditional model. Through Lasso regression, nine optimal features were selected to construct the radiomics model. ROC curve analysis showed that the AUC values of the three logistic regression models were 0.766, 0.766, and 0.878 in the training set, and 0.798, 0.801, and 0.847 in the validation set. Calibration curves and decision curve analysis showed that the radiomics model and the combined model had higher accuracy and better fit in predicting the risk of ischemic stroke. The radiomics model is slightly better than the traditional model in evaluating the risk of ischemic stroke, while the combined model has the best predictive performance.
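The AUC values reported above can be understood as a rank statistic: the probability that a randomly chosen stroke case scores higher than a randomly chosen asymptomatic one. A brute-force sketch (illustrative, not the authors' code):

```python
# Hedged sketch: AUC as the Mann-Whitney rank statistic over all
# positive/negative score pairs, with ties counted as 0.5.

def auc_rank(pos_scores, neg_scores) -> float:
    """P(score of a positive case > score of a negative case)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

print(auc_rank([0.9, 0.8, 0.4], [0.1, 0.2, 0.5]))
```

The DeLong test used in the study compares two such AUCs on the same cases by exploiting this rank-statistic formulation.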

Quantitative and automatic plan-of-the-day assessment to facilitate adaptive radiotherapy in cervical cancer.

Mason SA, Wang L, Alexander SE, Lalondrelle S, McNair HA, Harris EJ

PubMed · Jun 5 2025
Objective: To facilitate implementation of plan-of-the-day (POTD) selection for treating locally advanced cervical cancer (LACC), we developed a POTD assessment tool for CBCT-guided radiotherapy (RT). A female pelvis segmentation model (U-Seg3) is combined with a quantitative standard operating procedure (qSOP) to identify optimal and acceptable plans.

Approach: The planning CT[i], corresponding structure set[ii], and manually contoured CBCTs[iii] (n=226) from 39 LACC patients treated with POTD (n=11) or non-adaptive RT (n=28) were used to develop U-Seg3, an algorithm incorporating deep-learning and deformable image registration techniques to segment the low-risk clinical target volume (LR-CTV), high-risk CTV (HR-CTV), bladder, rectum, and bowel bag. A single-channel input model (iii only, U-Seg1) was also developed. Contoured CBCTs from the POTD patients were (a) reserved for U-Seg3 validation/testing, (b) audited to determine optimal and acceptable plans, and (c) used to empirically derive a qSOP that maximised classification accuracy. 

Main Results: The median [interquartile range] DSC between manual and U-Seg3 contours was 0.83 [0.80], 0.78 [0.13], 0.94 [0.05], 0.86 [0.09], and 0.90 [0.05] for the LR-CTV, HR-CTV, bladder, rectum, and bowel bag, respectively. These were significantly higher than U-Seg1 in all structures but the bladder. The qSOP classified plans as acceptable if they met target coverage thresholds (LR-CTV ≥ 99%, HR-CTV ≥ 99.8%), with lower LR-CTV coverage (≥ 95%) sometimes allowed. The acceptable plan minimising bowel irradiation was considered optimal unless substantial bladder sparing could be achieved. With U-Seg3 embedded in the qSOP, optimal and acceptable plans were identified in 46/60 and 57/60 cases, respectively.

Significance: U-Seg3 outperforms U-Seg1 and all known CBCT-based female pelvis segmentation models. The tool combining U-Seg3 and the qSOP identifies optimal plans with accuracy equivalent to that of two observers. In an implementation strategy in which this tool serves as the second observer, plan-selection confidence and decision-making time could be improved while reducing the required number of POTD-trained radiographers by 50%.
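The qSOP's coverage thresholds can be sketched as a simple rule; this is a deliberately simplified reading of the abstract that ignores the bowel- and bladder-sparing ranking used to pick the optimal plan among acceptable ones:

```python
# Hedged sketch: a rule-based plan classifier following the coverage
# thresholds quoted in the abstract (LR-CTV >= 99%, HR-CTV >= 99.8%,
# with LR-CTV >= 95% sometimes allowed). Simplified, not the actual qSOP.

def classify_plan(lr_ctv_cov: float, hr_ctv_cov: float) -> str:
    """Classify a candidate plan from its target-coverage percentages."""
    if hr_ctv_cov >= 99.8 and lr_ctv_cov >= 99.0:
        return "acceptable"
    if hr_ctv_cov >= 99.8 and lr_ctv_cov >= 95.0:
        return "conditionally acceptable"
    return "unacceptable"

print(classify_plan(lr_ctv_cov=99.4, hr_ctv_cov=99.9))
```

In the full procedure, the acceptable plan minimising bowel irradiation would then be promoted to optimal unless substantial bladder sparing could be achieved elsewhere.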


High-definition motion-resolved MRI using 3D radial kooshball acquisition and deep learning spatial-temporal 4D reconstruction.

Murray V, Wu C, Otazo R

pubmed logopapersJun 5 2025
Objective: To develop motion-resolved volumetric MRI with 1.1 mm isotropic resolution and scan times <5 minutes using a combination of 3D radial kooshball acquisition and spatial-temporal deep learning 4D reconstruction for free-breathing high-definition lung MRI.

Approach: Free-breathing lung MRI was conducted on eight healthy volunteers and ten patients with lung tumors on a 3T MRI scanner using a 3D radial kooshball sequence with half-spoke (ultrashort echo time, UTE, TE=0.12 ms) and full-spoke (T1-weighted, TE=1.55 ms) acquisitions. Data were motion-sorted using amplitude binning on a respiratory motion signal. Two high-definition Movienet (HD-Movienet) deep learning models were proposed to reconstruct 3D radial kooshball data: slice-by-slice reconstruction in the coronal orientation using 2D convolutional kernels (2D-based HD-Movienet) and reconstruction on blocks of eight coronal slices using 3D convolutional kernels (3D-based HD-Movienet). Two applications were considered: (a) anatomical imaging at expiration and inspiration with four motion states and a scan time of 2 minutes, and (b) dynamic motion imaging with 10 motion states and a scan time of 4 minutes. Training was performed using XD-GRASP 4D images reconstructed from 4.5-minute and 6.5-minute acquisitions as references.

Main Results: 2D-based HD-Movienet achieved a reconstruction time of <6 seconds, significantly faster than the iterative XD-GRASP reconstruction (>10 minutes with GPU optimization), while maintaining image quality comparable to XD-GRASP with two extra minutes of scan time. The 3D-based HD-Movienet improved reconstruction quality at the expense of longer reconstruction times (<11 seconds).

Significance: HD-Movienet demonstrates the feasibility of motion-resolved 4D MRI with isotropic 1.1 mm resolution and scan times of only 2 minutes for four motion states and 4 minutes for 10 motion states, marking a significant advancement in clinical free-breathing lung MRI.
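The amplitude binning used to motion-sort the kooshball data can be sketched as follows, assuming a one-dimensional respiratory signal with non-constant amplitude (an illustrative simplification, not the authors' pipeline):

```python
# Hedged sketch: amplitude binning of a respiratory motion signal into
# n_bins motion states. Each sample is assigned to the bin covering its
# amplitude; the signal is assumed to have non-zero dynamic range.

def amplitude_bins(signal, n_bins: int):
    """Return one motion-state index (0..n_bins-1) per signal sample."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins
    return [min(int((s - lo) / width), n_bins - 1) for s in signal]

# e.g. a 4-state sort for the anatomical-imaging application
states = amplitude_bins([0.1, 0.9, 0.5, 0.2, 0.8], n_bins=4)
print(states)
```

Each bin's k-space spokes would then be reconstructed jointly, which is where the spatial-temporal 4D network takes over from this simple sorting step.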

StrokeNeXt: an automated stroke classification model using computed tomography and magnetic resonance images.

Ekingen E, Yildirim F, Bayar O, Akbal E, Sercek I, Hafeez-Baig A, Dogan S, Tuncer T

PubMed · Jun 5 2025
Stroke ranks among the leading causes of disability and death worldwide. Timely detection can reduce its impact. Machine learning delivers powerful tools for image-based diagnosis. This study introduces StrokeNeXt, a lightweight convolutional neural network (CNN) for computed tomography (CT) and magnetic resonance (MR) scans, and couples it with deep feature engineering (DFE) to improve accuracy and facilitate clinical deployment. We assembled a multimodal dataset of CT and MR images, each labeled as stroke or control. StrokeNeXt employs a ConvNeXt-inspired block and a squeeze-and-excitation (SE) unit across four stages: stem, StrokeNeXt block, downsampling, and output. In the DFE pipeline, StrokeNeXt extracts features from fixed-size patches, iterative neighborhood component analysis (INCA) selects the top features, and a t algorithm-based k-nearest neighbors (tkNN) classifier performs the final classification. StrokeNeXt achieved 93.67% test accuracy on the assembled dataset. Integrating DFE raised accuracy to 97.06%. This combined approach outperformed StrokeNeXt alone and reduced classification time. StrokeNeXt paired with DFE offers an effective solution for stroke detection on CT and MR images. Its high accuracy and small number of learnable parameters make it lightweight and suitable for integration into clinical workflows. This research lays a foundation for real-time decision support in emergency and radiology settings.
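The final tkNN classification stage builds on plain k-nearest neighbors; a minimal Euclidean-distance kNN sketch is shown here (the "t algorithm" refinement in the paper is not reproduced):

```python
# Hedged sketch: vanilla k-nearest-neighbors voting over feature vectors,
# the base classifier underlying the paper's tkNN stage. Illustrative only.
from collections import Counter

def knn_predict(train_X, train_y, query, k: int = 3):
    """Predict the majority label among the k nearest training samples."""
    dists = sorted((sum((a - b) ** 2 for a, b in zip(x, query)), y)
                   for x, y in zip(train_X, train_y))
    votes = [label for _, label in dists[:k]]
    return Counter(votes).most_common(1)[0][0]

X = [(0.0, 0.0), (0.0, 1.0), (5.0, 5.0), (6.0, 5.0)]
y = ["control", "control", "stroke", "stroke"]
print(knn_predict(X, y, (0.0, 0.5), k=3))
```

In the DFE pipeline, the inputs here would be the INCA-selected deep features rather than raw pixels.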

Association between age and lung cancer risk: evidence from lung lobar radiomics.

Li Y, Lin C, Cui L, Huang C, Shi L, Huang S, Yu Y, Zhou X, Zhou Q, Chen K, Shi L

PubMed · Jun 5 2025
Previous studies have highlighted the prominent role of age in lung cancer risk, with signs of lung aging visible in computed tomography (CT) imaging. This study aims to characterize lung aging using quantitative radiomic features extracted from five delineated lung lobes and to explore how age contributes to lung cancer development through these features. We analyzed baseline CT scans from the Wenling lung cancer screening cohort, consisting of 29,810 participants. A deep learning-based segmentation method was used to delineate lung lobes. A total of 1,470 features were extracted from each lobe. The minimum redundancy maximum relevance algorithm was applied to identify the top 10 age-related radiomic features among 13,137 never smokers. Multiple regression analyses were used to adjust for confounders in the association of age, lung lobar radiomic features, and lung cancer. Linear, Cox proportional hazards, and parametric accelerated failure time models were applied as appropriate. Mediation analyses were conducted to evaluate whether lobar radiomic features mediate the relationship between age and lung cancer risk. Age was significantly associated with an increased lung cancer risk, particularly among current smokers (hazard ratio = 1.07, P = 2.81 × 10⁻¹³). Age-related radiomic features exhibited distinct effects across lung lobes. Specifically, the first-order mean (mean attenuation value) filtered by wavelet in the right upper lobe increased with age (β = 0.019, P = 2.41 × 10⁻²⁷⁶), whereas it decreased in the right lower lobe (β = -0.028, P = 7.83 × 10⁻²⁷⁷). Three features, namely wavelet_HL_firstorder_Mean of the right upper lobe, wavelet_LH_firstorder_Mean of the right lower lobe, and original_shape_MinorAxisLength of the left upper lobe, were independently associated with lung cancer risk at the Bonferroni-adjusted P value threshold. Mediation analyses revealed that density and shape features partially mediated the relationship between age and lung cancer risk, while a suppression effect was observed for the wavelet first-order mean of the right upper lobe. The study reveals lobe-specific heterogeneity in lung aging patterns through radiomics and their associations with lung cancer risk. These findings may help identify new approaches for early intervention in aging-related lung cancer. Trial registration: not applicable.
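The mediation analysis above decomposes a total effect into direct and indirect parts; here is a bare-bones sketch of an OLS slope and the proportion-mediated ratio, ignoring the confounder adjustment and survival modeling the study performs:

```python
# Hedged sketch of two building blocks of a simple mediation analysis:
# a univariate OLS slope and the difference-in-coefficients proportion
# mediated, (total - direct) / total. Illustrative simplification only.

def ols_slope(x, y) -> float:
    """Least-squares slope of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def proportion_mediated(total_effect: float, direct_effect: float) -> float:
    """Share of the total effect carried through the mediator."""
    return (total_effect - direct_effect) / total_effect

# e.g. age -> radiomic feature slope on toy data
print(ols_slope([50.0, 60.0, 70.0], [0.95, 1.14, 1.33]))
```

A suppression effect, as reported for the right-upper-lobe wavelet mean, corresponds to the direct and indirect effects having opposite signs, making this ratio fall outside [0, 1].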

Artificial intelligence-based detection of dens invaginatus in panoramic radiographs.

Sarı AH, Sarı H, Magat G

PubMed · Jun 5 2025
The aim of this study was to automatically detect teeth with dens invaginatus (DI) in panoramic radiographs using deep learning algorithms and to compare the success of the algorithms. For this purpose, 400 panoramic radiographs with DI were collected from the faculty database and split into 60% training, 20% validation, and 20% test images. The training and validation images were labeled by oral, dental, and maxillofacial radiologists and augmented with various augmentation methods; the trained models were then evaluated on the images allocated for the test phase, and the results were assessed using performance measures including accuracy, sensitivity, F1 score, and mean detection time. According to the test results, YOLOv8 achieved a precision, sensitivity, and F1 score of 0.904 each and was the fastest detection model, with an average detection time of 0.041 s. The Faster R-CNN model achieved 0.912 precision, 0.904 sensitivity, and a 0.907 F1 score, with an average detection time of 0.1 s. The YOLOv9 algorithm showed the most successful performance, with 0.946 precision, 0.930 sensitivity, and a 0.937 F1 score, at an average detection time of 0.158 s per image. All models achieved over 90% success. YOLOv8 was relatively more successful in detection speed and YOLOv9 in the other performance criteria; Faster R-CNN ranked second in all criteria.
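Detection models such as the YOLO variants above are conventionally scored by matching predictions to ground-truth boxes at an intersection-over-union (IoU) threshold; a minimal IoU sketch for axis-aligned boxes (not the authors' evaluation code):

```python
# Hedged sketch: intersection-over-union for two axis-aligned boxes given
# as (x1, y1, x2, y2). A prediction typically counts as a true positive
# when IoU with a ground-truth box exceeds a threshold such as 0.5.

def iou(box_a, box_b) -> float:
    """Overlap area divided by union area of two boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))
```

The precision, sensitivity, and F1 values quoted for each model follow from counting these IoU-matched true positives against unmatched predictions and ground truths.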
