
Fu Y, Chen J, Chen Y, Lin Z, Ye L, Ye D, Gao F, Zhang C, Huang P

PubMed | Jul 2, 2025
To develop a dynamic contrast-enhanced ultrasound (CEUS)-based method for segmenting tumor perfusion subregions, quantifying tumor heterogeneity, and constructing models for distinguishing benign from malignant breast tumors. This retrospective-prospective cohort study analyzed CEUS videos of patients with breast tumors from four academic medical centers between September 2015 and October 2024. Pixel-based time-intensity curve (TIC) perfusion variables were extracted, followed by the generation of perfusion heterogeneity maps through cluster analysis. A combined diagnostic model incorporating clinical variables, subregion percentages, and radiomics scores was developed, and a nomogram based on this model was subsequently constructed for clinical application. A total of 339 participants were included in this bidirectional study. The retrospective data included 233 tumors divided into training and test sets; the prospective data comprised 106 tumors as an independent test set. Subregion analysis revealed that Subregion 2 dominated benign tumors, while Subregion 3 was prevalent in malignant tumors. Among 59 machine-learning models, Elastic Net (ENET) (α = 0.7) performed best. Age and subregion radiomics scores were independent risk factors. The combined model achieved area under the curve (AUC) values of 0.93, 0.82, and 0.90 in the training, retrospective test, and prospective test sets, respectively. The proposed CEUS-based method enhances visualization and quantification of tumor perfusion dynamics, significantly improving diagnostic accuracy for breast tumors.
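The pixel-level TIC clustering described above can be illustrated with a minimal sketch. The three TIC descriptors, the cluster count, and the use of KMeans are illustrative assumptions, not the authors' exact pipeline:

```python
import numpy as np
from sklearn.cluster import KMeans

def tic_features(video):
    """Per-pixel time-intensity curve (TIC) descriptors from a CEUS clip.

    video: array of shape (T, H, W) -- frame intensities over time.
    Returns an (H*W, 3) feature matrix: peak enhancement, time to peak,
    and area under the TIC (a crude wash-in/wash-out summary).
    """
    t, h, w = video.shape
    curves = video.reshape(t, -1)            # one TIC per pixel
    peak = curves.max(axis=0)                # peak enhancement
    ttp = curves.argmax(axis=0)              # time to peak (frame index)
    auc = curves.sum(axis=0)                 # area under the curve
    return np.stack([peak, ttp, auc], axis=1)

def perfusion_subregions(video, n_subregions=3, seed=0):
    """Cluster pixels into perfusion subregions; returns an (H, W) label map."""
    feats = tic_features(video).astype(float)
    feats = (feats - feats.mean(axis=0)) / (feats.std(axis=0) + 1e-8)
    labels = KMeans(n_clusters=n_subregions, n_init=10,
                    random_state=seed).fit_predict(feats)
    return labels.reshape(video.shape[1:])
```

The resulting label map is the "perfusion heterogeneity map"; per-tumor subregion percentages then become model inputs.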

Geng Z, Li K, Mei P, Gong Z, Yan R, Huang Y, Zhang C, Zhao B, Lu M, Yang R, Wu G, Ye G, Liao Y

PubMed | Jul 2, 2025
This study aimed to develop a pretreatment CT-based multichannel predictor integrating deep learning features encoded by Transformer models for preoperative diagnosis of major pathological response (MPR) in non-small cell lung cancer (NSCLC) patients receiving neoadjuvant immunochemotherapy. This multicenter diagnostic study retrospectively included 332 NSCLC patients from four centers. Pretreatment computed tomography images were preprocessed and segmented into region-of-interest cubes for radiomics modeling. These cubes were cropped into four groups of 2-dimensional image modules. A GoogLeNet architecture was trained independently on each group within a multichannel framework, with gradient-weighted class activation mapping and SHapley Additive exPlanations values used for visualization. Deep learning features were extracted and fused across the four image groups using the Transformer fusion model. After model training, model performance was evaluated via the area under the curve (AUC), sensitivity, specificity, F1 score, confusion matrices, calibration curves, decision curve analysis, integrated discrimination improvement, net reclassification improvement, and the DeLong test. The dataset was allocated into training (n = 172, Center 1), internal validation (n = 44, Center 1), and external test (n = 116, Centers 2-4) cohorts. Four optimal deep learning models and the best Transformer fusion model were developed. In the external test cohort, the traditional radiomics model exhibited an AUC of 0.736 [95% confidence interval (CI): 0.645-0.826]. The optimal deep learning imaging module showed a superior AUC of 0.855 (95% CI: 0.777-0.934). The fusion model, named Transformer_GoogLeNet, further improved classification accuracy (AUC = 0.924, 95% CI: 0.875-0.973). The new method of fusing multichannel deep learning features with the Transformer encoder can accurately diagnose whether NSCLC patients receiving neoadjuvant immunochemotherapy will achieve MPR.
Our findings may support improved surgical planning and contribute to better treatment outcomes through more accurate preoperative assessment.
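The fusion step could, in spirit, look like a single self-attention layer over the four pooled group descriptors. A toy NumPy sketch follows; the random projections stand in for learned weights, whereas the actual model is a trained Transformer encoder:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_fusion(group_feats, rng=None):
    """Fuse per-group deep features with one self-attention layer.

    group_feats: (G, D) array -- one pooled feature vector per image group
    (the abstract describes four 2-D image groups). Returns a single (D,)
    fused descriptor after attention mixing and mean pooling.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g, d = group_feats.shape
    wq, wk, wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))
    q, k, v = group_feats @ wq, group_feats @ wk, group_feats @ wv
    attn = softmax(q @ k.T / np.sqrt(d), axis=-1)   # (G, G) group-to-group weights
    return (attn @ v).mean(axis=0)                  # pool over groups
```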

Zhang X, Dong Z, Li H, Cheng Y, Tang W, Ni T, Zhang Y, Ai Q, Yang G

PubMed | Jul 2, 2025
To develop and validate an ensemble machine learning ultrasound radiomics model for predicting drug resistance in lymph node tuberculosis (LNTB). This multicenter study retrospectively included 234 cervical LNTB patients from one center, randomly divided into training (70%) and internal validation (30%) cohorts. Radiomic features were extracted from ultrasound images, and an L1-based method was used for feature selection. A predictive model combining ensemble machine learning and the AdaBoost algorithm was developed to predict drug resistance. Model performance was assessed using independent external test sets (Test A and Test B) from two other centers, with metrics including AUC, accuracy, precision, recall, F1 score, and decision curve analysis. Of the 851 radiomic features extracted, 161 were selected for the model. The model achieved AUCs of 0.998 (95% CI: 0.996-0.999), 0.798 (95% CI: 0.692-0.904), 0.846 (95% CI: 0.700-0.992), and 0.831 (95% CI: 0.688-0.974) in the training, internal validation, and external test sets A and B, respectively. The decision curve analysis showed a substantial net benefit across a threshold probability range of 0.38 to 0.57. The LNTB drug-resistance prediction model demonstrated high diagnostic efficacy in both internal and external validation. Radiomics, through the application of ensemble machine learning algorithms, provides new insights into drug resistance mechanisms and offers potential strategies for more effective patient treatment.
Keywords: lymph node tuberculosis; drug resistance; ultrasound; radiomics; machine learning.
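L1-based feature selection feeding an AdaBoost classifier maps naturally onto a scikit-learn pipeline. A hedged sketch; the penalty strength `C` and the estimator count are placeholders, not the study's tuned values:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectFromModel

def build_lntb_model(C=0.1, seed=0):
    """L1-penalised selection of radiomic features, then AdaBoost.

    Features with coefficients shrunk to zero by the L1 penalty are
    dropped before the boosted ensemble is fitted.
    """
    selector = SelectFromModel(
        LogisticRegression(penalty="l1", solver="liblinear", C=C,
                           random_state=seed))
    return Pipeline([
        ("scale", StandardScaler()),       # radiomic features vary in range
        ("l1_select", selector),           # sparse L1-based feature selection
        ("ada", AdaBoostClassifier(n_estimators=100, random_state=seed)),
    ])
```

Fitting the pipeline on the training cohort and scoring it on held-out external sets mirrors the validation scheme described above.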

Feng L, Han H, Mo J, Huang Y, Huang K, Zhou C, Wang X, Zhang J, Yang Z, Liu D, Zhang K, Chen H, Liu Q, Li R

PubMed | Jul 2, 2025
Surgical resection is an effective treatment for medically refractory mesial temporal lobe epilepsy (mTLE); however, more than one-third of patients fail to achieve seizure freedom after surgery. This study aimed to evaluate preoperative individual morphometric network characteristics and develop a machine learning model to predict surgical outcome in mTLE. This multicentre, retrospective study included 189 mTLE patients who underwent unilateral temporal lobectomy, as well as 78 normal controls, between February 2018 and June 2023. Postoperative seizure outcomes were categorized as seizure-free (SF, n = 125) or non-seizure-free (NSF, n = 64) at a minimum of one-year follow-up. The preoperative individualized structural covariance network (iSCN) derived from T1-weighted MRI was constructed for each patient by calculating deviations from the control-based reference distribution, and was further divided into the surgery network and the surgically spared network using a standard resection mask obtained by merging each patient's individual lacuna. Regional features were selected separately from bilateral, ipsilateral and contralateral iSCN abnormalities to train support vector machine models, validated in two independent external datasets. NSF patients showed greater iSCN deviations from the normative distribution in the surgically spared network compared to SF patients (P = 0.02). These deviations were widely distributed in the contralateral functional modules (P < 0.05, false discovery rate corrected). Seizure outcome was optimally predicted by the contralateral iSCN features, with an accuracy of 82% (P < 0.05, permutation test) and an area under the receiver operating characteristic curve (AUC) of 0.81, with the default mode and fronto-parietal areas contributing most. External validation in two independent cohorts showed accuracies of 80% and 88%, with AUCs of 0.80 and 0.82, respectively, emphasizing the generalizability of the model.
This study provides reliable personalized structural biomarkers for predicting surgical outcome in mTLE and has the potential to assist tailored surgical treatment strategies.
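The core "deviation from the control-based reference distribution" can be illustrated as a per-region z-score against the normal-control sample. This is a deliberate simplification of the full iSCN construction, which also involves structural covariance:

```python
import numpy as np

def iscn_deviation(patient_feats, control_feats):
    """Regional deviation of one patient from a control-based reference.

    patient_feats: (R,) regional morphometric values for one patient.
    control_feats: (N, R) the same measures in N normal controls.
    Returns per-region z-scores; large absolute values flag regions that
    deviate from the normative distribution.
    """
    mu = control_feats.mean(axis=0)
    sd = control_feats.std(axis=0, ddof=1)
    return (patient_feats - mu) / (sd + 1e-8)
```

Such z-score maps, split by the resection mask into surgery and surgically spared networks, would then feed the support vector machine.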

Roongruangsilp P, Narkbuakaew W, Khongkhunthian P

PubMed | Jul 2, 2025
The integration of artificial intelligence (AI) in dental implant planning has emerged as a transformative approach to enhance diagnostic accuracy and efficiency. This study aimed to evaluate the performance of two object detection models, Faster R-CNN and YOLOv7, in analyzing cross-sectional and panoramic images derived from DICOM files processed by four distinct dental imaging software platforms. The dataset consisted of 332 implant position images derived from DICOM files of 184 CBCT scans. Three hundred images were processed using DentiPlan Pro 3.7 software (NECTEC, NSTDA, Thailand) for the development of Faster R-CNN and YOLOv7 models for dental implant planning. For model testing, 32 additional implant position images, which were not included in the training set, were processed using four different software programs: DentiPlan Pro 3.7, DentiPlan Pro Plus 5.0 (DTP; NECTEC, NSTDA, Thailand), Implastation (ProDigiDent USA, USA), and Romexis 6.0 (Planmeca, Finland). The performance of the models was evaluated using detection rate, accuracy, precision, recall, F1 score, and the Jaccard Index (JI). Faster R-CNN achieved superior accuracy across imaging modalities, while YOLOv7 demonstrated higher detection rates, albeit with lower precision. The impact of image rendering algorithms on model performance underscores the need for standardized preprocessing pipelines. Although Faster R-CNN demonstrated relatively higher performance metrics, statistical analysis revealed no significant differences between the models (p > 0.05). This study emphasizes the potential of AI-driven solutions in dental implant planning and advocates for further research in this area. The absence of statistically significant differences between Faster R-CNN and YOLOv7 suggests that both models can be effectively utilized, depending on the specific requirements for accuracy or detection rate.
Furthermore, the variations in imaging rendering algorithms across different software platforms significantly influenced the model outcomes. AI models for DICOM analysis should rely on standardized image rendering to ensure consistent performance.
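The Jaccard Index (JI) used to score both detectors has a compact closed form for axis-aligned boxes:

```python
def jaccard_index(box_a, box_b):
    """Jaccard Index (IoU) between two boxes given as (x1, y1, x2, y2)."""
    # Intersection rectangle (clamped to zero when boxes are disjoint)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

A JI of 1.0 means a predicted implant-position box coincides exactly with the ground truth; 0.0 means no overlap.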

Yang S, Wu Y

PubMed | Jul 2, 2025
To address misdiagnosis caused by feature coupling in multi-label medical image classification, this study introduces a chest X-ray pathology reasoning method. It combines hierarchical attention convolutional networks with a multi-label decoupling loss function. This method aims to enhance the precise identification of complex lesions. It dynamically captures multi-scale lesion morphological features and integrates lung field partitioning with lesion localization through a dual-path attention mechanism, thereby improving clinical disease prediction accuracy. An adaptive dilated convolution module with 3 × 3 deformable kernels dynamically captures multi-scale lesion features. A channel-spatial dual-path attention mechanism enables precise feature selection for lung field partitioning and lesion localization. Cross-scale skip connections fuse shallow texture and deep semantic information, enhancing microlesion detection. A KL-divergence-constrained contrastive loss function decouples the 14 pathological feature representations via orthogonal regularization, effectively resolving multi-label coupling. Experiments on ChestX-ray14 show a weighted F1-score of 0.97, a Hamming Loss of 0.086, and AUC values exceeding 0.94 for all pathologies. This study provides a reliable tool for multi-disease collaborative diagnosis.
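One common way to realise orthogonal-regularization "decoupling" of the 14 pathology representations is a Frobenius-norm penalty on the Gram matrix of the class embeddings. This sketches that term only; the KL-divergence contrastive part of the abstract's loss is omitted:

```python
import numpy as np

def orthogonality_penalty(label_embeddings):
    """Penalty pushing pathology embeddings toward mutual orthogonality.

    label_embeddings: (L, D) class representations (e.g. L = 14 pathologies).
    Rows are first unit-normalised; the penalty is ||E E^T - I||_F^2, which
    is zero exactly when the rows are mutually orthogonal unit vectors,
    i.e. when the label features are fully decoupled.
    """
    e = label_embeddings / np.linalg.norm(label_embeddings, axis=1,
                                          keepdims=True)
    gram = e @ e.T
    return float(((gram - np.eye(len(e))) ** 2).sum())
```

Added to the classification loss with a small weight, this term discourages any two pathology representations from collapsing onto the same direction.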

Wada A, Tanaka Y, Nishizawa M, Yamamoto A, Akashi T, Hagiwara A, Hayakawa Y, Kikuta J, Shimoji K, Sano K, Kamagata K, Nakanishi A, Aoki S

PubMed | Jul 2, 2025
Large language models (LLMs) demonstrate significant potential in healthcare applications, but clinical deployment is limited by privacy concerns and insufficient medical domain training. This study investigated whether retrieval-augmented generation (RAG) can improve a locally deployable LLM for radiology contrast media consultation. In 100 synthetic iodinated contrast media consultations, we compared Llama 3.2-11B (baseline and RAG) with three cloud-based models: GPT-4o mini, Gemini 2.0 Flash, and Claude 3.5 Haiku. A blinded radiologist ranked the five replies per case, and three LLM-based judges scored accuracy, safety, structure, tone, applicability, and latency. Under controlled conditions, RAG eliminated hallucinations (0% vs 8%; Yates-corrected χ² = 6.38, p = 0.012) and improved mean rank by 1.3 (Z = -4.82, p < 0.001), though performance gaps with cloud models persist. The RAG-enhanced model remained faster (2.6 s vs 4.9-7.3 s), and the LLM-based judges preferred it over GPT-4o mini, though the radiologist ranked GPT-4o mini higher. RAG thus provides meaningful improvements for local clinical LLMs while maintaining the privacy benefits of on-premise deployment.
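The retrieval half of such a RAG setup can be sketched with TF-IDF over a local document store. The documents and prompt template below are invented placeholders; the study's actual knowledge base and its Llama 3.2-11B integration are not described in this abstract:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def retrieve_context(query, documents, k=2):
    """Top-k retrieval step of a RAG pipeline over a local document store."""
    vec = TfidfVectorizer().fit(documents + [query])
    sims = cosine_similarity(vec.transform([query]),
                             vec.transform(documents))[0]
    top = sims.argsort()[::-1][:k]        # indices of the k most similar docs
    return [documents[i] for i in top]

def build_prompt(query, documents, k=2):
    """Prepend retrieved passages so the local LLM answers from them."""
    ctx = "\n".join(f"- {d}" for d in retrieve_context(query, documents, k))
    return (f"Context:\n{ctx}\n\n"
            f"Question: {query}\n"
            f"Answer using only the context above.")
```

Grounding the answer in retrieved institutional guidance is what drives the hallucination reduction reported above.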

Zhou J, Wei Y, Li X, Zhou W, Tao R, Hua Y, Liu H

PubMed | Jul 2, 2025
Alzheimer's disease (AD) is a neurodegenerative disorder predominantly observed in the geriatric population. Early diagnosis of AD is highly beneficial to patients in terms of both prevention and treatment. Therefore, our team proposed a novel deep learning model named 3D-CNN-VSwinFormer. The model consists of two components: the first is a 3D CNN equipped with a 3D Convolutional Block Attention Module (3D CBAM), and the second is a fine-tuned Video Swin Transformer. Our investigation extracts features from subject-level 3D magnetic resonance imaging (MRI) data, retaining only a single 3D MRI image per participant. This method circumvents data leakage and addresses the issue of 2D slices failing to capture global spatial information. We utilized the ADNI dataset to validate our proposed model. In differentiating between AD patients and cognitively normal (CN) individuals, we achieved accuracy and AUC values of 92.92% and 0.9660, respectively. Compared to other studies on AD and CN recognition, our model yielded superior results, enhancing the efficiency of AD diagnosis.
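The channel branch of a CBAM-style attention block, which the 3D CBAM extends to volumetric features, can be sketched in plain NumPy. Random weights stand in for the learned MLP parameters here:

```python
import numpy as np

def channel_attention(feature_map, reduction=2, rng=None):
    """Channel branch of a CBAM-style attention block.

    feature_map: (C, D, H, W) 3-D feature volume. Global average- and
    max-pooled channel descriptors pass through a shared two-layer MLP
    with a reduction bottleneck; the sigmoid of their sum rescales each
    channel, letting the network emphasise informative channels.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    c = feature_map.shape[0]
    w1 = rng.standard_normal((c, c // reduction)) / np.sqrt(c)
    w2 = rng.standard_normal((c // reduction, c)) / np.sqrt(c // reduction)
    avg = feature_map.mean(axis=(1, 2, 3))          # global average pooling
    mx = feature_map.max(axis=(1, 2, 3))            # global max pooling
    mlp = lambda x: np.maximum(x @ w1, 0) @ w2      # shared ReLU MLP
    scale = 1 / (1 + np.exp(-(mlp(avg) + mlp(mx)))) # sigmoid gate per channel
    return feature_map * scale[:, None, None, None]
```

CBAM pairs this channel gate with a spatial gate; only the channel branch is shown.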

Zhang Y, Wang L, Yuan D, Qi K, Zhang M, Zhang W, Gao J, Liu J

PubMed | Jul 2, 2025
This study aims to assess the feasibility of "double-low" (low radiation dose and low contrast media dose) CT pulmonary angiography (CTPA) based on deep-learning image reconstruction (DLIR) algorithms. One hundred consecutive patients (41 females; average age 60.9 years, range 18-90) were prospectively scanned on multi-detector CT systems. Fifty patients in the conventional-dose group (CD group) underwent CTPA with a 100 kVp protocol using a traditional iterative reconstruction algorithm, and 50 patients in the low-dose group (LD group) underwent CTPA with a 70 kVp DLIR protocol. Radiation and contrast agent doses were recorded and compared between groups. Objective parameters were measured and compared. Two radiologists separately evaluated images for overall image quality, artifacts, and image contrast on a 5-point scale. The furthest visible branches were compared between groups. Compared to the CD group, the LD group reduced the dose-length product by 80.3% (p < 0.01) and the contrast media dose by 33.3%. CT values, SD values, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) showed no statistically significant differences (all p > 0.05) between the LD and CD groups. The overall image quality scores were comparable between the LD and CD groups (p > 0.05), with good inter-reader agreement (k = 0.75). More peripheral pulmonary vessels could be assessed in the LD group than in the CD group. 70 kVp scanning combined with DLIR reconstruction for CTPA can further reduce radiation and contrast agent doses while maintaining image quality and increasing the visibility of the distal pulmonary artery branches.
Question: Elevated radiation exposure and substantial doses of contrast media during CT pulmonary angiography (CTPA) augment patient risks.
Findings: The "double-low" CTPA protocol can diminish radiation doses by 80.3% and minimize contrast doses by one-third while maintaining image quality.
Clinical relevance: With deep learning algorithms, we confirmed that CTPA images maintained excellent quality despite reduced radiation and contrast dosages, helping to reduce radiation exposure and kidney burden on patients. The "double-low" CTPA protocol, complemented by deep learning image reconstruction, prioritizes patient safety.
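The objective parameters compared between the LD and CD groups (SNR, CNR) have standard ROI-based definitions, sketched below. ROI placement and the noise definition vary between studies; this is one common convention:

```python
import numpy as np

def snr_cnr(vessel_roi, background_roi):
    """ROI-based image-quality metrics for a CTPA comparison.

    vessel_roi, background_roi: 1-D arrays of HU values sampled from a
    pulmonary artery ROI and an adjacent background (e.g. muscle) ROI.
    SNR = mean(vessel) / sd(vessel);
    CNR = (mean(vessel) - mean(background)) / sd(background).
    """
    snr = vessel_roi.mean() / vessel_roi.std(ddof=1)
    cnr = (vessel_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)
    return snr, cnr
```

Matched SNR/CNR between protocols, as reported above, is what supports the claim that image quality is preserved at the lower dose.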