Page 207 of 6546537 results

Mołek-Dziadosz P, Woźniak A, Furman-Niedziejko A, Pieszko K, Szachowicz-Jaworska J, Miszalski-Jamka T, Krupiński M, Dweck MR, Nessler J, Gackowski A

pubmed logopapers · Sep 1 2025
Cardiac magnetic resonance (CMR) is the gold standard for assessing left ventricular ejection fraction (LVEF), and artificial intelligence (AI)-based echocardiographic analysis is increasingly used in clinical practice. This study compared LVEF measurements from expert-read and AI-automated echocardiography (ECHO) against CMR as the reference standard. We retrospectively analyzed 118 patients who underwent both CMR and ECHO within 7 days. LVEF measured by CMR was compared with results from AI-based software that automatically analyzed all stored DICOM loops (Multi-Loop AI Analysis) in ECHO. The AI analysis was then repeated using only the single best-quality loop for each of the 2-chamber and 4-chamber views (One-Loop AI Analysis). These results were further compared with standard ECHO analysis performed by two independent experts. Agreement was assessed using Pearson's correlation and Bland-Altman analysis, as well as Cohen's kappa and concordance for categorization of LVEF into subgroups (≤30%, 31-40%, 41-50%, 51-70%, and >70%). Both experts demonstrated strong inter-reader agreement (R = 0.88, κ = 0.77) and correlated well with CMR LVEF (Expert 1: R = 0.86, κ = 0.74; Expert 2: R = 0.85, κ = 0.68). Multi-Loop AI Analysis correlated strongly with CMR (R = 0.87, κ = 0.68) and with the experts (R = 0.88-0.90). One-Loop AI Analysis showed numerically higher concordance with CMR LVEF (R = 0.89, κ = 0.75) than both Multi-Loop AI Analysis and the experts. AI-based analysis agreed with CMR to a similar degree as human experts. AI-based ECHO analysis is promising, but its results should be interpreted with caution.
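The agreement statistics used in this study can be sketched in a few lines. The following is a minimal illustration with made-up paired LVEF values, not the study's data; the subgroup cut-offs follow the abstract (≤30%, 31-40%, 41-50%, 51-70%, >70%).

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def lvef_category(lvef):
    """Map an LVEF percentage to the study's subgroups (0-4)."""
    if lvef <= 30: return 0
    if lvef <= 40: return 1
    if lvef <= 50: return 2
    if lvef <= 70: return 3
    return 4

# Hypothetical paired LVEF readings (CMR vs. AI-based ECHO), in percent
cmr = [25, 38, 45, 55, 62, 35, 48, 60]
ai  = [28, 36, 47, 53, 65, 33, 50, 58]

bias, lo, hi = bland_altman(cmr, ai)
agree = sum(lvef_category(x) == lvef_category(y)
            for x, y in zip(cmr, ai)) / len(cmr)
```

Cohen's kappa would additionally correct this subgroup agreement for chance; only raw concordance is shown here.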

Yoo TW, Yeo CD, Lee EJ, Oh IS

pubmed logopapers · Sep 1 2025
The identification of endolymphatic hydrops (EH) using magnetic resonance imaging (MRI) is crucial for understanding inner ear disorders such as Meniere's disease and sudden low-frequency hearing loss. The EH ratio is calculated as the ratio of the endolymphatic fluid space to the perilymphatic fluid space. We propose a novel cross-channel feature transfer (CCFT) 3D U-Net for fully automated segmentation of the perilymphatic and endolymphatic fluid spaces in hydrops MRI. The model exhibits state-of-the-art performance in segmenting the endolymphatic fluid space by transferring magnetic resonance cisternography (MRC) features to HYDROPS-Mi2 (HYbriD of Reversed image Of Positive endolymph signal and native image of positive perilymph Signal multiplied with the heavily T2-weighted MR cisternography). Experimental results using the CCFT module showed that the segmentation performance of the perilymphatic space was 0.9459 for the Dice similarity coefficient (DSC) and 0.8975 for the intersection over union (IOU), and that of the endolymphatic space was 0.8053 for the DSC and 0.6778 for the IOU.
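Both overlap metrics reported above are defined directly on binary masks; a minimal sketch with toy 4×4 masks (not the paper's data) follows.

```python
import numpy as np

def dice_iou(pred, target):
    """Dice similarity coefficient and intersection-over-union for binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = 2 * inter / (pred.sum() + target.sum())
    return float(dice), float(inter / union)

# Toy masks standing in for an endolymphatic-space segmentation
pred = np.array([[1, 1, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0],
                 [1, 0, 0, 0],
                 [0, 0, 0, 0],
                 [0, 0, 0, 0]])
dsc, iou = dice_iou(pred, gt)
```

On a single mask pair the two are related by IoU = DSC / (2 − DSC), which is why IoU always reads lower than DSC for the same segmentation.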

Waqas M, Hasan S, Ghori AF, Alfaraj A, Faheemuddin M, Khurshid Z

pubmed logopapers · Sep 1 2025
To overcome the scarcity of annotated dental X-ray datasets, this study presents a novel pipeline for generating high-resolution synthetic orthopantomography (OPG) images using customized generative adversarial networks (GANs). A total of 4777 real OPG images were collected from clinical centres in Pakistan, Thailand, and the U.S., covering diverse anatomical features. Twelve GAN models were initially trained, with four top-performing variants selected for further training on both combined and region-specific datasets. Synthetic images were generated at 2048 × 1024 pixels, maintaining fine anatomical detail. The evaluation was conducted using (1) a YOLO-based object detection model trained on real OPGs to assess feature representation via mean average precision, and (2) expert dentist scoring for anatomical and diagnostic realism. All selected models produced realistic synthetic OPGs. The YOLO detector achieved strong performance on these images, indicating accurate structural representation. Expert evaluations confirmed high anatomical plausibility, with models M1 and M3 achieving over 50% of the reference scores assigned to real OPGs. The developed GAN-based pipeline enables the ethical and scalable creation of synthetic OPG images, suitable for augmenting datasets used in artificial intelligence-driven dental diagnostics. This method provides a practical solution to data limitations in dental artificial intelligence, supporting model development in privacy-sensitive or low-resource environments.
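The YOLO-based evaluation boils down to matching predicted boxes against ground truth by IoU; the sketch below uses hypothetical boxes in corner format, not the study's detector outputs.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def precision_at_iou(preds, gts, thr=0.5):
    """Fraction of predicted boxes matching some ground-truth box at IoU >= thr."""
    hits = sum(any(box_iou(p, g) >= thr for g in gts) for p in preds)
    return hits / len(preds) if preds else 0.0
```

Averaging such precision over recall levels and classes yields the mean average precision the authors use to check that synthetic OPGs preserve detectable anatomy.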

Yang J, Luo Z, Wen Y, Zhang J

pubmed logopapers · Sep 1 2025
Thyroid nodules are a common clinical concern, with accurate diagnosis being critical for effective treatment and improved patient outcomes. Traditional ultrasound examinations rely heavily on the physician's experience, which can lead to diagnostic variability. The integration of artificial intelligence (AI) into medical imaging offers a promising solution for enhancing diagnostic accuracy and efficiency. This study aimed to evaluate the effectiveness of the You Only Look Once v. 11 (YOLOv11) model in detecting and classifying thyroid nodules through ultrasound images, with the goal of supporting real-time clinical decision-making and improving diagnostic workflows. We used the YOLOv11 model to analyze a dataset of 1,503 thyroid ultrasound images, divided into training (1,203 images), validation (150 images), and test (150 images) sets, comprising 742 benign and 778 malignant nodules. Advanced data augmentation and transfer learning techniques were applied to optimize model performance. Comparative analysis was conducted with other YOLO variants (YOLOv3 to YOLOv10) and residual network 50 (ResNet50) to assess their diagnostic capabilities. The YOLOv11 model exhibited superior performance in thyroid nodule detection compared with the other YOLO variants (YOLOv3 to YOLOv10) and ResNet50. At an intersection over union (IoU) of 0.5, YOLOv11 achieved a precision (P) of 0.841 and recall (R) of 0.823, outperforming ResNet50's P of 0.8333 and R of 0.8025. Among the YOLO variants, YOLOv11 consistently achieved the highest P and R values. For benign nodules, YOLOv11 obtained a P of 0.835 and an R of 0.833, while for malignant nodules, it reached a P of 0.846 and an R of 0.813. Within the YOLOv11 model itself, performance varied across different IoU thresholds (0.25, 0.5, 0.7, and 0.9): lower IoU thresholds generally yielded better performance metrics, with P and R decreasing as the IoU threshold increased.
YOLOv11 proved to be a powerful tool for thyroid nodule detection and malignancy classification, offering high P and real-time performance. These attributes are vital for dynamic ultrasound examinations and enhancing diagnostic efficiency. Future research will focus on expanding datasets and validating the model's clinical utility in real-time settings.
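The pattern of P and R falling as the IoU threshold rises follows directly from how matches are counted; the toy sketch below uses hypothetical best-match IoU values, not the study's detector outputs.

```python
# Hypothetical best-match IoU for each of ten predicted nodule boxes
ious = [0.92, 0.81, 0.74, 0.66, 0.55, 0.48, 0.35, 0.88, 0.61, 0.27]
n_pred, n_gt = len(ious), 12  # assume twelve ground-truth nodules

def pr_at(thr):
    """Precision and recall when a prediction counts as a TP at IoU >= thr."""
    tp = sum(i >= thr for i in ious)
    return tp / n_pred, tp / n_gt

# Precision/recall at the four thresholds examined in the study
pr = {thr: pr_at(thr) for thr in (0.25, 0.5, 0.7, 0.9)}
```

Raising the threshold can only remove matches, so both metrics are non-increasing in the threshold, which matches the trend the authors report.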

Li W, Zhu Y, Zhao G, Chen X, Zhao X, Xu H, Che Y, Chen Y, Ye Y, Dou X, Wang H, Cheng J, Xie Q, Chen K

pubmed logopapers · Sep 1 2025
Accurate staging of hepatic fibrosis is critical for prognostication and management among patients with chronic liver disease, and noninvasive, efficient alternatives to biopsy are urgently needed. This study aimed to evaluate the performance of an automated deep learning (DL) algorithm for fibrosis staging and for differentiating patients with hepatic fibrosis from healthy individuals via magnetic resonance (MR) images with and without additional clinical data. A total of 500 patients from two medical centers were retrospectively analyzed. DL models were developed based on delayed-phase MR images to predict fibrosis stages. Additional models were constructed by integrating the DL algorithm with nonimaging variables, including serologic biomarkers [aminotransferase-to-platelet ratio index (APRI) and fibrosis index based on four factors (FIB-4)], viral status (hepatitis B and C), and MR scanner parameters. Diagnostic performance was assessed via the area under the receiver operating characteristic curve (AUROC), and comparisons were made using the DeLong test. Sensitivity and specificity of the DL and full models (DL plus all clinical features) were compared with those of experienced radiologists and serologic biomarkers via the McNemar test. In the test set, the full model achieved AUROC values of 0.99 [95% confidence interval (CI): 0.94-1.00], 0.98 (95% CI: 0.93-0.99), 0.90 (95% CI: 0.83-0.95), 0.81 (95% CI: 0.73-0.88), and 0.84 (95% CI: 0.76-0.90) for staging F0-4, F1-4, F2-4, F3-4, and F4, respectively. This model significantly outperformed the DL model in early-stage classification (F0-4 and F1-4). Compared with expert radiologists, it showed superior specificity for F0-4 and higher sensitivity across the other four classification tasks. Both the DL and full models showed significantly greater specificity than did the biomarkers for staging advanced fibrosis (F3-4 and F4).
The proposed DL algorithm provides a noninvasive method for hepatic fibrosis staging and screening, outperforming both radiologists and conventional biomarkers, and may facilitate improved clinical decision-making.
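The AUROC values above have a simple rank interpretation: the probability that a randomly chosen diseased case scores higher than a randomly chosen control. A minimal sketch via the Mann-Whitney U statistic, with toy scores rather than the study's model outputs:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U statistic (ties get half credit)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

The DeLong test the authors use compares two such AUROCs on the same cases while accounting for their correlation; the statistic itself is computed from these same pairwise comparisons.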

Zhu W, Wang X, Xing J, Xu XS, Yuan M

pubmed logopapers · Sep 1 2025
Lung cancer remains one of the malignant tumors with the highest global morbidity and mortality rates. Detecting pulmonary nodules in computed tomography (CT) images is essential for early lung cancer screening. However, traditional detection methods often suffer from low accuracy and efficiency, limiting their clinical effectiveness. This study aims to devise an advanced deep-learning framework capable of achieving high-precision, rapid identification of pulmonary nodules in CT imaging, thereby facilitating earlier and more accurate diagnosis of lung cancer. To address these issues, this paper proposes an improved deep-learning framework named YOLOv8-BCD, based on YOLOv8 and integrating the BiFormer attention mechanism, Content-Aware ReAssembly of Features (CARAFE) up-sampling method, and Depth-wise Over-Parameterized Depth-wise Convolution (DO-DConv) enhanced convolution. To overcome common challenges such as low resolution, noise, and artifacts in lung CT images, the model employs Super-Resolution Generative Adversarial Network (SRGAN)-based image enhancement during preprocessing. The BiFormer attention mechanism is introduced into the backbone to enhance feature extraction capabilities, particularly for small nodules, while CARAFE and DO-DConv modules are incorporated into the head to optimize feature fusion efficiency and reduce computational complexity. Experimental comparisons using 550 CT images from the LUng Nodule Analysis 2016 (LUNA16) dataset demonstrated that the proposed YOLOv8-BCD achieved detection accuracy and mean average precision (mAP) at an intersection over union (IoU) threshold of 0.5 (mAP<sub>0.5</sub>) of 86.4% and 88.3%, respectively, surpassing YOLOv8 by 2.2% in accuracy and 4.5% in mAP<sub>0.5</sub>.
Additional evaluation on the external TianChi lung nodule dataset further confirmed the model's generalization capability, achieving an mAP<sub>0.5</sub> of 83.8% and mAP<sub>0.5-0.95</sub> of 43.9% with an inference speed of 98 frames per second (FPS). The YOLOv8-BCD model effectively assists clinicians by significantly reducing interpretation time, improving diagnostic accuracy, and minimizing the risk of missed diagnoses, thereby enhancing patient outcomes.
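The mAP<sub>0.5-0.95</sub> figure is the COCO-style average of AP over ten IoU thresholds from 0.5 to 0.95 in steps of 0.05; a minimal sketch with an illustrative AP curve (not the model's measured APs):

```python
import numpy as np

# COCO-style mAP(0.5:0.95) averages AP over ten IoU thresholds
thresholds = np.arange(0.5, 1.0, 0.05)

def map_50_95(ap_fn):
    """Average an AP-at-threshold function over the ten COCO thresholds.

    ap_fn is any callable mapping an IoU threshold to an AP value.
    """
    return float(np.mean([ap_fn(t) for t in thresholds]))

# Hypothetical AP curve that decays linearly as the threshold tightens
demo = map_50_95(lambda t: max(0.0, 0.9 - (t - 0.5)))
```

Because AP at 0.95 demands near-perfect localization, mAP<sub>0.5-0.95</sub> (43.9% here) is always well below mAP<sub>0.5</sub> (83.8%).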

Feng X, Zhang Y, Lu M, Ma C, Miao X, Yang J, Lin L, Zhang Y, Zhang K, Zhang N, Kang Y, Luo Y, Cao K

pubmed logopapers · Sep 1 2025
Currently, there is no fully automated tool available for evaluating the degree of cervical spinal stenosis. The aim of this study was to develop and validate the use of artificial intelligence (AI) algorithms for the assessment of cervical spinal stenosis. In this retrospective multi-center study, cervical spine magnetic resonance imaging (MRI) scans obtained from July 2020 to June 2023 were included. Studies of patients with spinal instrumentation or studies with suboptimal image quality were excluded. Sagittal T2-weighted images were used. The training data from the Fourth People's Hospital of Shanghai (Hos. 1) and Shanghai Changzheng Hospital (Hos. 2) were annotated by two musculoskeletal (MSK) radiologists following Kang's system as the standard reference. First, a convolutional neural network (CNN) was trained to detect the region of interest (ROI), followed by a Transformer for classification. The performance of the deep learning (DL) model was assessed on an internal test set from Hos. 2 and an external test set from Shanghai Changhai Hospital (Hos. 3), and compared among six readers. Metrics such as detection precision, interrater agreement, sensitivity (SEN), and specificity (SPE) were calculated. Overall, 795 patients were analyzed (mean age ± standard deviation, 55±14 years; 346 female), with 589 in the training (75%) and validation (25%) sets, 206 in the internal test set, and 95 in the external test set. Four tasks covering different clinical application scenarios were trained, with accuracy (ACC) ranging from 0.8993 to 0.9532.
When a Kang system score of ≥2 was used as the threshold for diagnosing central cervical canal stenosis, the algorithm and the six readers achieved similar AUCs in the internal test set [0.936; 95% confidence interval (CI): 0.916-0.955], with a SEN of 90.3% and an SPE of 93.8%; in the external test set, the AUC of the DL model was 0.931 (95% CI: 0.917-0.946), with a SEN of 100% and an SPE of 86.3%. Correlation analysis comparing the DL method, the six readers, and MRI reports against the reference standard showed moderate correlation, with R values ranging from 0.589 to 0.668. The DL model produced approximately the same rates of upgrades (9.2%) and downgrades (5.1%) as the six readers. The DL model could fully automatically and reliably assess cervical canal stenosis using MRI scans.
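Dichotomizing Kang grades at ≥2, as the evaluation above does, reduces grading to a binary test whose SEN and SPE follow from the 2×2 counts; a minimal sketch with made-up grades, not the study's data:

```python
def sens_spec(pred_grades, true_grades, thr=2):
    """Dichotomize Kang grades at >= thr (stenosis positive); return SEN, SPE."""
    tp = fn = tn = fp = 0
    for p, t in zip(pred_grades, true_grades):
        pos_p, pos_t = p >= thr, t >= thr
        if pos_t and pos_p:
            tp += 1
        elif pos_t:
            fn += 1
        elif pos_p:
            fp += 1
        else:
            tn += 1
    sen = tp / (tp + fn) if (tp + fn) else 0.0
    spe = tn / (tn + fp) if (tn + fp) else 0.0
    return sen, spe

# Hypothetical predicted vs. reference Kang grades (0-3)
sen, spe = sens_spec([0, 1, 2, 3, 2, 0], [0, 2, 2, 3, 1, 0])
```

Under-grading a true grade-2 case costs sensitivity, while over-grading a grade-1 case costs specificity, which is exactly the upgrade/downgrade trade-off the abstract quantifies.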

Chen S, Zhong Z, Chen Y, Tang W, Fan Y, Sui Y, Hu W, Pan L, Liu S, Kong Q, Guo Y, Liu W

pubmed logopapers · Sep 1 2025
The use of multiparametric magnetic resonance imaging (MRI) in predicting lymphovascular invasion (LVI) in breast cancer has been well-documented in the literature. However, the majority of the related studies have primarily focused on intratumoral characteristics, overlooking the potential contribution of peritumoral features. The aim of this study was to evaluate the effectiveness of multiparametric MRI in predicting LVI by analyzing both intratumoral and peritumoral radiomics features and to assess the added value of incorporating both regions in LVI prediction. A total of 366 patients from two centers underwent preoperative breast MRI and were divided into training (n=208), validation (n=70), and test (n=88) sets. Imaging features were extracted from intratumoral and peritumoral T2-weighted imaging, diffusion-weighted imaging, and dynamic contrast-enhanced MRI. Five models were developed for predicting LVI status based on logistic regression: the tumor area (TA) model, peritumoral area (PA) model, tumor-plus-peritumoral area (TPA) model, clinical model, and combined model. The combined model incorporated the best-performing radiomics score together with clinical factors. Predictive efficacy was evaluated via the receiver operating characteristic (ROC) curve and area under the curve (AUC). The Shapley additive explanation (SHAP) method was used to rank the features and explain the final model. The performance of the TPA model was superior to that of the TA and PA models. A combined model was further developed via multivariable logistic regression, incorporating the TPA radiomics score (radscore), MRI-assessed axillary lymph node (ALN) status, and peritumoral edema (PE). The combined model demonstrated good calibration and discrimination performance across the training, validation, and test datasets, with AUCs of 0.888 [95% confidence interval (CI): 0.841-0.934], 0.856 (95% CI: 0.769-0.943), and 0.853 (95% CI: 0.760-0.946), respectively.
Furthermore, we conducted SHAP analysis to evaluate the contributions of TPA radscore, MRI-ALN status, and PE in LVI status prediction. The combined model, incorporating clinical factors and intratumoral and peritumoral radscore, effectively predicts LVI and may potentially aid in tailored treatment planning.
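Since the combined model is a multivariable logistic regression over the TPA radscore, MRI-ALN status, and peritumoral edema, its prediction step can be sketched as below; the weights and intercept are illustrative placeholders, not the paper's fitted coefficients.

```python
import math

def combined_model(radscore, aln_positive, edema,
                   w=(2.0, 1.2, 0.8), b=-1.5):
    """Logistic combination of the TPA radiomics score (continuous) with
    MRI-assessed ALN status and peritumoral edema (both coded 0/1).
    Weights w and intercept b are illustrative, not fitted values."""
    z = b + w[0] * radscore + w[1] * aln_positive + w[2] * edema
    return 1.0 / (1.0 + math.exp(-z))

p_high = combined_model(0.9, 1, 1)  # high radscore, ALN-positive, edema present
p_low  = combined_model(0.1, 0, 0)  # low radscore, no adverse findings
```

SHAP values then decompose each predicted probability into per-feature contributions, which is how the authors rank the three predictors.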

Keatmanee C, Songsaeng D, Klabwong S, Nakaguro Y, Kunapinun A, Ekpanyapong M, Dailey MN

pubmed logopapers · Sep 1 2025
The accurate assessment of thyroid nodules, which are increasingly common with age and lifestyle factors, is essential for early malignancy detection. Ultrasound imaging, the primary diagnostic tool for this purpose, holds promise when paired with deep learning. However, challenges persist with small datasets, where conventional data augmentation can introduce noise and obscure essential diagnostic features. To address dataset imbalance and enhance model generalization, this study integrated curriculum learning with a weakly supervised attention network and attention-guided data augmentation to improve deep learning performance in classifying thyroid nodules. Using verified datasets from Siriraj Hospital, the model was trained progressively, beginning with simpler images and gradually incorporating more complex cases. This structured learning approach is designed to enhance the model's diagnostic accuracy by refining its ability to distinguish benign from malignant nodules. Among the curriculum learning schemes tested, scheme IV achieved the best results, with a precision of 100% for benign and 70% for malignant nodules, a recall of 82% for benign and 100% for malignant, and F1-scores of 90% and 83%, respectively. This structured approach improved the model's diagnostic sensitivity and robustness. These findings suggest that automated thyroid nodule assessment, supported by curriculum learning, has the potential to complement radiologists in clinical practice, enhancing diagnostic accuracy and aiding in more reliable malignancy detection.
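The easy-to-hard schedule at the heart of curriculum learning can be sketched as a difficulty sort followed by a staged split; the sample IDs and difficulty scores below are hypothetical stand-ins for whatever image-complexity measure is used.

```python
def curriculum_stages(samples, difficulty, n_stages=3):
    """Order samples easy-to-hard and split them into training stages."""
    ranked = [s for _, s in sorted(zip(difficulty, samples))]
    k = -(-len(ranked) // n_stages)  # ceiling division
    return [ranked[i:i + k] for i in range(0, len(ranked), k)]

# Toy image IDs with hypothetical difficulty scores
stages = curriculum_stages(["a", "b", "c", "d", "e", "f"],
                           [3, 1, 2, 6, 5, 4])
```

Training then proceeds stage by stage, starting from stages[0] and progressively adding harder cases, mirroring the abstract's "simpler images first" protocol.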

Hou C, Zhang M, Jiang X, Li H

pubmed logopapers · Sep 1 2025
People living with human immunodeficiency virus (PLWH) are at risk of human immunodeficiency virus (HIV)-associated neurocognitive disorders (HAND). The mildest disease stage of HAND is asymptomatic neurocognitive impairment (ANI), and the accurate diagnosis of this stage can facilitate timely clinical interventions. The aim of this study was to mine features related to the diagnosis of ANI based on resting-state functional magnetic resonance imaging (rs-fMRI) and to establish classification models. A total of 74 patients with ANI and 78 PLWH without neurocognitive disorders (PWND) were enrolled. Basic clinical, T1-weighted imaging, and rs-fMRI data were obtained. The rs-fMRI signal values and radiomics features of 116 brain regions designated by the Anatomical Automatic Labeling template were collected, and the features were selected via the least absolute shrinkage and selection operator. rs-fMRI, radiomics, and combined models were each constructed with five machine learning classifiers. Model performance was evaluated via the mean area under the curve (AUC), accuracy, sensitivity, and specificity. Twenty-one rs-fMRI signal values and 28 radiomics features were selected to construct models. The performance of the combined models was exceptional, with the standout random forest (RF) model delivering an AUC value of 0.902 [95% confidence interval (CI): 0.813-0.990] in the validation set and 1.000 (95% CI: 1.000-1.000) in the training set. Further analysis of the 49 features revealed significantly overlapping brain regions for both feature types. Three key features demonstrating significant differences between ANI and PWND were identified (all P values <0.001). These features correlated with cognitive test performance (r>0.3). The RF combined model exhibited high classification performance in ANI, enabling objective and reliable individual diagnosis in clinical practice.
It thus represents a novel method for characterizing the brain functional impairment and pathophysiology of patients with ANI. Greater attention should be paid to the frontoparietal regions and the striatum in research and clinical work related to ANI.
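The LASSO selection step can be illustrated in its simplest closed-form case, an orthogonal design with columns scaled so that XᵀX = nI, where each coefficient is just the soft-thresholded univariate fit; real rs-fMRI features are correlated, so this is only a sketch of the mechanism.

```python
import numpy as np

def soft_threshold(b, lam):
    """Shrink coefficients toward zero; exactly zero inside [-lam, lam]."""
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

def lasso_select(X, y, lam):
    """LASSO in the orthogonal-design special case (X.T @ X == n * I):
    coefficients are the soft-thresholded least-squares fits X_j . y / n."""
    b = X.T @ y / len(y)
    coefs = soft_threshold(b, lam)
    return np.flatnonzero(coefs), coefs

# Toy orthogonal design: only the first feature carries signal
X = np.eye(4) * 2.0
y = np.array([2.0, 0.0, 0.0, 0.0])
selected, coefs = lasso_select(X, y, lam=0.5)
```

In the study, the features surviving this sparsification feed the five downstream classifiers, including the random forest that performed best.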