
Ultra-fast whole-brain T2-weighted imaging in 7 seconds using dual-type deep learning reconstruction with single-shot acquisition: clinical feasibility and comparison with conventional methods.

Ikebe Y, Fujima N, Kameda H, Harada T, Shimizu Y, Kwon J, Yoneyama M, Kudo K

PubMed · Sep 26 2025
To evaluate the image quality and clinical utility of ultra-fast T2-weighted imaging (UF-T2WI), which acquires all slice data in 7 s using a single-shot turbo spin-echo (SSTSE) technique combined with dual-type deep learning (DL) reconstruction incorporating DL-based image denoising and super-resolution processing, by comparing UF-T2WI with conventional T2WI. We analyzed data from 38 patients who underwent both conventional T2WI and UF-T2WI with dual-type DL-based image reconstruction. Two board-certified radiologists independently performed blinded qualitative assessments of the images obtained with each sequence, evaluating overall image quality, anatomical structure visibility, and levels of noise and artifacts. In cases involving central nervous system disease, lesion delineation was also assessed. The quantitative analysis included measurements of signal-to-noise ratios (SNRs) in white and gray matter and the contrast-to-noise ratio (CNR) between gray and white matter. Compared to conventional T2WI, UF-T2WI with DL received significantly higher ratings for overall image quality and lower noise and artifact levels (p < 0.001 for both readers). Anatomical visibility was significantly better in UF-T2WI for one reader, with no significant difference for the other. Lesion visibility in UF-T2WI was comparable to that in conventional T2WI. Quantitatively, the SNRs and CNRs were all significantly higher in UF-T2WI than in conventional T2WI (p < 0.001). The combination of SSTSE with dual-type DL reconstruction allows the acquisition of clinically acceptable T2WI images in just 7 s. This technique shows strong potential to reduce MRI scan times and improve clinical workflow efficiency.
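The SNR and CNR measurements reported above reduce to simple ROI statistics. The Python sketch below illustrates one common convention (mean ROI signal over a noise standard deviation); the ROI values, noise level, and the exact noise-estimation convention are illustrative assumptions, not the study's protocol.

```python
import numpy as np

def snr(roi: np.ndarray, noise_sd: float) -> float:
    """Signal-to-noise ratio: mean ROI signal over the noise standard deviation."""
    return float(roi.mean() / noise_sd)

def cnr(roi_a: np.ndarray, roi_b: np.ndarray, noise_sd: float) -> float:
    """Contrast-to-noise ratio between two tissue ROIs."""
    return float(abs(roi_a.mean() - roi_b.mean()) / noise_sd)

# Hypothetical ROI intensity samples (not taken from the study)
rng = np.random.default_rng(0)
gm = rng.normal(900.0, 20.0, 500)   # gray-matter ROI voxels
wm = rng.normal(700.0, 20.0, 500)   # white-matter ROI voxels
sigma = 15.0                        # assumed background noise SD

print(snr(gm, sigma), cnr(gm, wm, sigma))
```

Other conventions (e.g. dividing by the tissue SD itself, or scaling the noise SD for magnitude images) change the absolute values but not the comparison between sequences.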

Exploring learning transferability in deep segmentation of colorectal cancer liver metastases.

Abbas M, Badic B, Andrade-Miranda G, Bourbonne V, Jaouen V, Visvikis D, Conze PH

PubMed · Sep 26 2025
Ensuring the seamless transfer of knowledge and models across various datasets and clinical contexts is of paramount importance in medical image segmentation. This is especially true for liver lesion segmentation, which plays a key role in pre-operative planning and treatment follow-up. Despite the progress of deep learning algorithms using Transformers, automatically segmenting small hepatic metastases remains a persistent challenge. This can be attributed to the degradation of small structures by the feature down-sampling inherent to many deep architectures, coupled with the imbalance between foreground metastasis voxels and background. While similar challenges have been observed for liver tumors originating from hepatocellular carcinoma, their manifestation in the context of liver metastasis delineation remains under-explored and requires well-defined guidelines. Through comprehensive experiments, this paper aims to bridge this gap and to demonstrate the impact of various transfer learning schemes from off-the-shelf datasets to a dataset containing liver metastases only. Our scale-specific evaluation reveals that models trained from scratch or with domain-specific pre-training demonstrate greater proficiency.

Prediction of neoadjuvant chemotherapy efficacy in patients with HER2-low breast cancer based on ultrasound radiomics.

Peng Q, Ji Z, Xu N, Dong Z, Zhang T, Ding M, Qu L, Liu Y, Xie J, Jin F, Chen B, Song J, Zheng A

PubMed · Sep 26 2025
Neoadjuvant chemotherapy (NAC) is a crucial therapeutic approach for treating breast cancer, yet accurately predicting treatment response remains a significant clinical challenge. Conventional ultrasound plays a vital role in assessing tumor morphology but lacks the ability to quantitatively capture intratumoral heterogeneity. Ultrasound radiomics, which extracts high-throughput quantitative imaging features, offers a novel approach to enhance NAC response prediction. This study aims to evaluate the predictive efficacy of ultrasound radiomics models based on pre-treatment, post-treatment, and combined imaging features for assessing the NAC response in patients with HER2-low breast cancer. This retrospective multicenter study included 359 patients with HER2-low breast cancer who underwent NAC between January 1, 2016, and December 31, 2020. A total of 488 radiomic features were extracted from pre- and post-treatment ultrasound images. Feature selection was conducted in two stages: first, Pearson correlation analysis (threshold: 0.65) was applied to remove highly correlated features and reduce redundancy; then, Recursive Feature Elimination with Cross-Validation (RFECV) was employed to identify the optimal feature subset for model construction. The dataset was divided into a training set (244 patients) and an external validation set (115 patients from independent centers). Model performance was assessed via the area under the receiver operating characteristic curve (AUC), accuracy, precision, recall, and F1 score. 
Three models were initially developed: (1) a pre-treatment model (AUC = 0.716), (2) a post-treatment model (AUC = 0.772), and (3) a combined pre- and post-treatment model (AUC = 0.762). After RFECV-based feature selection, the optimized models achieved: (1) pre-treatment, AUC = 0.746; (2) post-treatment, AUC = 0.712; and (3) combined, AUC = 0.759. Ultrasound radiomics is a non-invasive and promising approach for predicting response to neoadjuvant chemotherapy in HER2-low breast cancer. The pre-treatment model yielded reliable performance after feature selection. While the combined model did not substantially enhance predictive accuracy, its stable performance suggests that longitudinal ultrasound imaging may help capture treatment-induced phenotypic changes. These findings offer preliminary support for individualized therapeutic decision-making.
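The two-stage feature selection described above (a Pearson correlation filter at 0.65, then recursive feature elimination with cross-validation) can be sketched in Python with scikit-learn. The data, the greedy form of the correlation filter, and the logistic-regression base estimator are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np
from sklearn.feature_selection import RFECV
from sklearn.linear_model import LogisticRegression

def drop_correlated(X: np.ndarray, threshold: float = 0.65) -> np.ndarray:
    """Greedily drop features whose absolute Pearson correlation with an
    already-kept feature exceeds the threshold; return kept column indices."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in kept):
            kept.append(j)
    return np.array(kept)

# Hypothetical radiomic feature matrix (not the study's data)
rng = np.random.default_rng(42)
X = rng.normal(size=(120, 20))
X[:, 1] = X[:, 0] + rng.normal(scale=0.05, size=120)  # a near-duplicate feature
y = (X[:, 0] + X[:, 2] > 0).astype(int)

kept = drop_correlated(X, 0.65)                       # stage 1: redundancy filter
selector = RFECV(LogisticRegression(max_iter=1000), cv=5).fit(X[:, kept], y)
print(len(kept), selector.n_features_)                # stage 2: RFECV subset size
```

The near-duplicate column is removed by the correlation filter before RFECV ever sees it, which is the point of running the cheap filter first.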

Performance of artificial intelligence in automated measurement of patellofemoral joint parameters: a systematic review.

Zhan H, Zhao Z, Liang Q, Zheng J, Zhang L

PubMed · Sep 26 2025
The evaluation of patellofemoral joint parameters is essential for diagnosing patellar dislocation, yet manual measurements exhibit poor reproducibility and vary considerably with clinician expertise. This systematic review aimed to evaluate the performance of artificial intelligence (AI) models in automatically measuring patellofemoral joint parameters. A comprehensive literature search of the PubMed, Web of Science, Cochrane Library, and Embase databases was conducted from database inception through June 15, 2025. Two investigators independently performed study screening and data extraction, with methodological quality assessed using the modified MINORS checklist. A narrative review was conducted to summarize the findings of the included studies. A total of 19 studies comprising 10,490 patients met the inclusion and exclusion criteria, with a mean age of 51.3 years and a mean female proportion of 56.8%. Among these, six studies developed AI models based on radiographic series, nine on CT imaging, and four on MRI. The results demonstrated excellent reliability, with intraclass correlation coefficients (ICCs) ranging from 0.900 to 0.940 for femoral anteversion angle, 0.910-0.920 for trochlear groove depth, and 0.930-0.950 for tibial tuberosity-trochlear groove distance. Additionally, good reliability was observed for patellar height (ICCs 0.880-0.985), sulcus angle (ICCs 0.878-0.980), and patellar tilt angle (ICCs 0.790-0.990). Notably, the AI system successfully detected trochlear dysplasia, achieving 88% accuracy, 79% sensitivity, 96% specificity, and an AUC of 0.88. AI-based measurement of patellofemoral joint parameters demonstrates methodological robustness and operational efficiency, showing strong agreement with expert manual measurements. To further establish clinical utility, multicenter prospective studies incorporating rigorous external validation protocols are needed; such validation would strengthen generalizability and facilitate integration into clinical decision support systems. This systematic review was registered in PROSPERO (CRD420251075068).
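The ICCs reported above quantify AI-versus-expert agreement. As a minimal sketch, the snippet below computes a two-way random-effects, single-rater ICC(2,1) from an (n subjects × k raters) table; the angle measurements are hypothetical, and the specific ICC form used by each included study may differ.

```python
import numpy as np

def icc2_1(ratings: np.ndarray) -> float:
    """Two-way random, single-rater ICC(2,1) for an (n_subjects, n_raters) array."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return float((msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n))

# Hypothetical AI-vs-expert sulcus-angle measurements in degrees (not study data)
ai     = np.array([138.0, 142.5, 150.1, 133.7, 146.2])
expert = np.array([137.5, 143.0, 149.5, 134.2, 145.8])
print(icc2_1(np.column_stack([ai, expert])))
```

Values near 1.0 correspond to the "excellent reliability" ranges quoted in the review.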

Segmental airway volume as a predictive indicator of postoperative extubation timing in patients with oral and maxillofacial space infections: a retrospective analysis.

Liu S, Shen H, Zhu B, Zhang X, Zhang X, Li W

PubMed · Sep 26 2025
The objective of this study was to investigate the significance of segmental airway volume in developing a predictive model to guide the timing of postoperative extubation in patients with oral and maxillofacial space infections (OMSIs). A retrospective cohort study was performed to analyse clinical data from 177 medical records, with a focus on key variables related to disease severity and treatment outcomes. The inclusion criteria of this study were as follows: adherence to the OMSI diagnostic criteria (local tissue inflammation characterized by erythema, oedema, hyperthermia and tenderness); compromised functions such as difficulties opening the mouth, swallowing, or breathing; the presence of purulent material confirmed by puncture or computed tomography (CT); and laboratory examinations indicating an underlying infection process. The data included age, sex, body mass index (BMI), blood test results, smoking history, history of alcohol abuse, the extent of mouth opening, the number of infected spaces, and the source of infection. DICOM files were imported into 3D Slicer for manual segmentation, followed by volume measurement of each segment. We observed statistically significant differences in age, neutrophil count, lymphocyte count, and C4 segment volume among patient subgroups stratified by extubation time. Regression analysis revealed that age and C4 segment volume were significantly correlated with extubation time. Additionally, the machine learning models yielded good evaluation metrics. Segmental airway volume shows promise as an indicator for predicting extubation time. Predictive models constructed using machine learning algorithms yield good predictive performance and may facilitate clinical decision-making.

Automated deep learning method for whole-breast segmentation in contrast-free quantitative MRI.

Gao W, Zhang Y, Gao B, Xia Y, Liang W, Yang Q, Shi F, He T, Han G, Li X, Su X, Zhang Y

PubMed · Sep 26 2025
To develop a deep learning segmentation method utilizing the nnU-Net architecture for fully automated whole-breast segmentation based on diffusion-weighted imaging (DWI) and synthetic MRI (SyMRI) images. A total of 98 patients with 196 breasts were evaluated. All patients underwent 3.0T magnetic resonance (MR) examinations incorporating DWI and SyMRI techniques. The ground truth for breast segmentation was established through a manual, slice-by-slice approach performed by two experienced radiologists. The U-Net and nnU-Net deep learning algorithms were employed to segment the whole breast. Performance was evaluated using several metrics, including the Dice similarity coefficient (DSC), accuracy, and Pearson's correlation coefficient. For DWI and the proton density (PD) images of SyMRI, nnU-Net outperformed U-Net, achieving higher DSCs in both the testing set (DWI, 0.930 ± 0.029 vs. 0.785 ± 0.161; PD, 0.969 ± 0.010 vs. 0.936 ± 0.018) and the independent testing set (DWI, 0.953 ± 0.019 vs. 0.789 ± 0.148; PD, 0.976 ± 0.008 vs. 0.939 ± 0.018). The PD images of SyMRI performed better than DWI, attaining the highest DSC and accuracy. The R² values for nnU-Net were 0.99-1.00 for both DWI and PD, significantly surpassing those of U-Net. nnU-Net exhibited excellent segmentation performance for fully automated breast segmentation of contrast-free quantitative images. This method serves as an effective tool for processing large-scale clinical datasets and represents a significant advancement toward computer-aided quantitative analysis of breast DWI and SyMRI images.
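The DSC values above compare predicted and manual masks by overlap. A minimal Python sketch of the metric on toy 2-D masks (real evaluations run on 3-D volumes, and the masks here are illustrative):

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Toy masks standing in for a predicted vs. manual breast segmentation
a = np.zeros((8, 8), dtype=bool); a[2:6, 2:6] = True   # 16 "voxels"
b = np.zeros((8, 8), dtype=bool); b[3:7, 3:7] = True   # same size, offset by 1
print(dice(a, b))  # overlap 3x3 = 9 → 2*9/32 = 0.5625
```

A DSC around 0.97, as reported for nnU-Net on PD images, corresponds to near-complete overlap with the manual reference.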

Radiomics-based machine learning model integrating preoperative vertebral computed tomography and clinical features to predict cage subsidence after single-level anterior cervical discectomy and fusion with a zero-profile anchored spacer.

Zheng B, Yu P, Ma K, Zhu Z, Liang Y, Liu H

PubMed · Sep 26 2025
To develop a machine-learning model that combines pre-operative vertebral-body CT radiomics with clinical data to predict cage subsidence after single-level ACDF with Zero-P. We retrospectively reviewed 253 patients (2016-2023). Subsidence was defined as ≥ 3 mm loss of fused-segment height at final follow-up. Patients were split 8:2 into a training set (n = 202; 39 subsidence) and an independent test set (n = 51; 14 subsidence). Vertebral bodies adjacent to the target level were segmented on pre-operative CT, and high-throughput radiomic features were extracted with PyRadiomics. Features were z-score-normalized, then reduced by variance, correlation, and LASSO filtering. Age, vertebral Hounsfield units (HU), and T1-slope entered a clinical model. Eight classifiers were tuned by cross-validation; performance was assessed by AUC and related metrics, with thresholds optimized on the training cohort. Patients with subsidence were older and had lower HU and higher T1-slope (all P < 0.05). LASSO retained 11 radiomic features. In the independent test set, the clinical model had limited discrimination (AUC 0.595). The radiomics model improved performance (AUC 0.775; sensitivity 100%; specificity 60%). The combined model performed best (AUC 0.813; sensitivity 80%; specificity 80%) and surpassed both single-source models (P < 0.05). A pre-operative model integrating CT-based radiomic signatures with key clinical variables predicted cage subsidence after ACDF with good accuracy. This tool may facilitate individualized risk stratification and guide strategies such as endplate protection, implant choice, and bone-quality optimization to mitigate subsidence risk. Multicentre prospective validation is warranted.
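The z-score-then-LASSO reduction step described above can be sketched with scikit-learn. The feature matrix, the sparse ground truth, and the use of `LassoCV` as the selector are illustrative assumptions, not the authors' exact configuration.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Hypothetical radiomic matrix: 200 patients x 30 features (not the study's data)
rng = np.random.default_rng(7)
X = rng.normal(size=(200, 30))
# outcome driven by a handful of features, mimicking a sparse radiomic signature
y = 2.0 * X[:, 0] - 1.5 * X[:, 5] + 0.8 * X[:, 9] + rng.normal(scale=0.3, size=200)

Xz = StandardScaler().fit_transform(X)             # z-score normalization
lasso = LassoCV(cv=5, random_state=0).fit(Xz, y)   # L1 path with CV-chosen alpha
retained = np.flatnonzero(np.abs(lasso.coef_) > 1e-6)
print(retained)                                    # indices of surviving features
```

Features whose coefficients are driven exactly to zero by the L1 penalty drop out, leaving a compact signature analogous to the 11 features retained in the study.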

Deep learning-based cardiac computed tomography angiography left atrial segmentation and quantification in atrial fibrillation patients: a multi-model comparative study.

Feng L, Lu W, Liu J, Chen Z, Jin J, Qian N, Pan J, Wang L, Xiang J, Jiang J, Wang Y

PubMed · Sep 26 2025
Quantitative assessment of left atrial volume (LAV) is an important factor in the study of the pathogenesis of atrial fibrillation. However, automated left atrial segmentation with quantitative assessment still faces many challenges. The main objective of this study was to find the optimal left atrial segmentation model based on cardiac computed tomography angiography (CTA) and to perform quantitative LAV measurement. A multi-center left atrial study cohort containing 182 cardiac CTAs from patients with atrial fibrillation was created, each case accompanied by expert image annotation from a cardiologist. Based on this left atrium dataset, five recent state-of-the-art (SOTA) medical image segmentation models were trained and validated: DAResUNet, nnFormer, xLSTM-UNet, UNETR, and VNet. The optimal segmentation model was then used for consistency validation of the LAV measurements. DAResUNet achieved the best performance in DSC (0.924 ± 0.023) and JI (0.859 ± 0.065) among all models, while VNet was the best performer in HD (12.457 ± 6.831) and ASD (1.034 ± 0.178). The Bland-Altman plot demonstrated very strong agreement (mean bias -5.69 mL, 95% LoA -19.0 to 7.6 mL) between the model's automatic predictions and manual measurements. Deep learning models based on a study cohort of 182 CTA left atrial images achieved competitive results in left atrium segmentation. LAV assessment based on deep learning models may be useful as a biomarker of the onset of atrial fibrillation.
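The Bland-Altman agreement quoted above reduces to a mean bias and 95% limits of agreement (bias ± 1.96 SD of the paired differences). A minimal sketch, using hypothetical paired LAV values rather than the study's data:

```python
import numpy as np

def bland_altman(auto: np.ndarray, manual: np.ndarray):
    """Mean bias and 95% limits of agreement for paired measurements."""
    diff = auto - manual
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired LAV measurements in mL (not the study's cohort)
manual = np.array([95.0, 110.0, 130.0, 150.0, 170.0, 185.0])
auto   = np.array([90.0, 104.0, 126.0, 147.0, 162.0, 180.0])
bias, lo, hi = bland_altman(auto, manual)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

A consistently negative bias, as in the study's -5.69 mL, indicates the automatic method slightly underestimates volume relative to manual measurement.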

Pathomics-based machine learning models for optimizing LungPro navigational bronchoscopy in peripheral lung lesion diagnosis: a retrospective study.

Ying F, Bao Y, Ma X, Tan Y, Li S

PubMed · Sep 26 2025
To construct a pathomics-based machine learning model to enhance the diagnostic efficacy of LungPro navigational bronchoscopy for peripheral pulmonary lesions and to optimize the management strategy for LungPro-diagnosed negative lesions. Clinical data and hematoxylin and eosin (H&E)-stained whole slide images (WSIs) were collected from 144 consecutive patients undergoing LungPro virtual bronchoscopy at a single institution between January 2022 and December 2023. Patients were stratified into diagnosis-positive and diagnosis-negative cohorts based on histopathological or etiological confirmation. An artificial intelligence (AI) model was developed and validated using 94 diagnosis-positive cases. Logistic regression (LR) identified associations between clinical/imaging characteristics and risk factors for malignant pulmonary lesions. We implemented a convolutional neural network (CNN) with weakly supervised learning to extract image-level features, followed by multiple instance learning (MIL) for patient-level feature aggregation. Multiple machine learning (ML) algorithms were applied to model the extracted features. A multimodal diagnostic framework integrating clinical, imaging, and pathomics data was subsequently developed and evaluated on 50 LungPro-negative patients to assess its diagnostic performance and predictive validity. Univariable and multivariable logistic regression analyses identified age, lesion boundary, and mean computed tomography (CT) attenuation as independent risk factors for malignant peripheral pulmonary lesions (P < 0.05). A histopathological model using a MIL fusion strategy showed strong diagnostic performance for lung cancer, with area under the curve (AUC) values of 0.792 (95% CI 0.680-0.903) in the training cohort and 0.777 (95% CI 0.531-1.000) in the test cohort.
Combining predictive clinical features with pathological characteristics enhanced the diagnostic AUC for peripheral pulmonary lesions to 0.848 (95% CI 0.695-1.000). In patients with initially negative LungPro biopsy results, the model identified 20 of 28 malignant lesions (sensitivity 71.43%) and 15 of 22 benign lesions (specificity 68.18%). Class activation mapping (CAM) validated the model by highlighting key malignant features, including conspicuous nucleoli and nuclear atypia. The fusion diagnostic model incorporating clinical and pathomic features markedly enhances the diagnostic accuracy of LungPro in this retrospective cohort. The model aids in the detection of subtle malignant characteristics, thereby offering evidence to support precise, targeted therapeutic interventions for lesions that LungPro classifies as negative in clinical settings.
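The MIL step above aggregates patch-level CNN features into one patient-level vector. The abstract does not specify which aggregator was used; the sketch below shows one common scheme, attention-based MIL pooling, with random vectors standing in for trained patch embeddings and attention weights.

```python
import numpy as np

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_mil_pool(instances: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Aggregate instance-level (patch) features into one bag-level (patient)
    feature vector via attention weights (gating and MLP scoring omitted)."""
    scores = instances @ w        # one relevance score per patch
    alpha = softmax(scores)       # attention distribution over the bag
    return alpha @ instances      # weighted patient-level embedding

# Hypothetical patch embeddings from a CNN: 12 patches x 8-dim features
rng = np.random.default_rng(1)
bag = rng.normal(size=(12, 8))
w = rng.normal(size=8)            # stands in for a trained attention head
patient_vec = attention_mil_pool(bag, w)
print(patient_vec.shape)
```

The resulting bag-level vector is what downstream classifiers (the "multiple ML algorithms" above) would consume, one vector per patient regardless of patch count.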

MedIENet: medical image enhancement network based on conditional latent diffusion model.

Yuan W, Feng Y, Wen T, Luo G, Liang J, Sun Q, Liang S

PubMed · Sep 26 2025
Deep learning necessitates a substantial amount of data, yet obtaining sufficient medical images is difficult due to concerns about patient privacy and high collection costs. To address this issue, we propose a conditional latent diffusion model-based medical image enhancement network, referred to as the Medical Image Enhancement Network (MedIENet). To meet the rigorous standards required for image generation in the medical imaging field, a multi-attention module is incorporated in the encoder of the denoising U-Net backbone. Additionally, Rotary Position Embedding (RoPE) is integrated into the self-attention module to effectively capture positional information, while cross-attention is used to integrate class information into the diffusion process. MedIENet is evaluated on three datasets: Chest CT-Scan images, Chest X-Ray Images (Pneumonia), and a Tongue dataset. Compared to existing methods, MedIENet demonstrates superior performance in both fidelity and diversity of the generated images. Experimental results indicate that for downstream classification tasks using ResNet50, the Area Under the Receiver Operating Characteristic curve (AUROC) achieved with real data alone is 0.76 for the Chest CT-Scan dataset, 0.87 for the Chest X-Ray Images (Pneumonia) dataset, and 0.78 for the Tongue dataset. When mixed data consisting of real and generated images are used, the AUROC improves to 0.82, 0.94, and 0.82, respectively, increases of approximately 6, 7, and 4 percentage points. These findings indicate that images generated by MedIENet can enhance the performance of downstream classification tasks, providing an effective solution to the scarcity of medical image training data.
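RoPE, mentioned above, encodes position by rotating consecutive feature pairs through position-dependent angles rather than adding a positional vector. A minimal NumPy sketch of the idea (the pairing convention and base frequency of 10000 follow the original RoPE formulation; the input is a toy array, not MedIENet's attention states):

```python
import numpy as np

def rope(x: np.ndarray) -> np.ndarray:
    """Apply rotary position embedding to a (seq_len, dim) array by rotating
    each consecutive feature pair by a position-dependent angle."""
    seq, dim = x.shape
    half = dim // 2
    freqs = 1.0 / (10000 ** (np.arange(half) / half))  # per-pair frequencies
    angles = np.outer(np.arange(seq), freqs)           # (seq, half) rotation angles
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                    # split into feature pairs
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                 # 2-D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

q = np.ones((4, 8))                                    # toy query block
print(np.allclose(rope(q)[0], q[0]))                   # position 0 is unrotated
```

Because rotation preserves norms, RoPE injects relative-position information into attention dot products without changing token magnitudes, which is why it drops into an existing self-attention module cleanly.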