Page 145 of 6486473 results

Chen Q, Zhang Q, Li Z, Zhang S, Xia Y, Wang H, Lu Y, Zheng A, Shao C, Shen F

PubMed · Sep 22, 2025
To investigate the value of MRI-based habitat analysis for predicting pathologic response following neoadjuvant chemoradiotherapy (nCRT) in rectal cancer (RC) patients. A total of 1,021 RC patients from three hospitals were divided into the training and test sets (n = 319), the internal validation set (n = 317), and external validation sets 1 (n = 158) and 2 (n = 227). Deep learning was used to automatically segment the entire lesion on high-resolution MRI. Simple linear iterative clustering was used to divide each tumor into subregions, from which radiomics features were extracted. The optimal number of clusters reflecting the diversity of the tumor ecosystem was determined. Finally, four models were developed: clinical, intratumoral heterogeneity (ITH)-based, radiomics, and fusion models, and their performance was evaluated. The impact of nCRT on disease-free survival (DFS) was further analyzed. The Delong test revealed that the fusion model (AUCs of 0.867, 0.851, 0.852, and 0.818 in the four cohorts, respectively), the radiomics model (0.831, 0.694, 0.753, and 0.705), and the ITH model (0.790, 0.786, 0.759, and 0.722) were all superior to the clinical model (0.790, 0.605, 0.735, and 0.704). However, no significant differences were detected between the fusion and ITH models. Patients stratified using the fusion model showed significant differences in DFS between the good and poor response groups (all p < 0.05 in the four sets). The fusion model combining clinical factors, radiomics features, and ITH features may help predict pathologic response in RC patients receiving nCRT.
Question Identifying rectal cancer (RC) patients likely to benefit from neoadjuvant chemoradiotherapy (nCRT) before treatment is crucial.
Findings The fusion model showed the best performance in predicting response after neoadjuvant chemoradiotherapy.
Clinical relevance The fusion model integrates clinical characteristics, radiomics features, and intratumoral heterogeneity (ITH) features, and can be applied to predict response to nCRT in RC patients, offering potential benefits for personalized treatment strategies.
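The abstract does not describe how the ITH features are computed; as a minimal, hypothetical sketch (not the authors' implementation), one common heterogeneity-style feature is the Shannon entropy of the habitat-label proportions produced by subregion clustering:

```python
import math
from collections import Counter

def habitat_entropy(labels):
    """Shannon entropy (bits) of habitat-label proportions.

    `labels` is a flat list of per-voxel cluster assignments, e.g. from
    SLIC-based subregion clustering. Higher entropy means habitats are
    more evenly mixed, i.e. higher intratumoral heterogeneity.
    """
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A tumor split evenly across 4 habitats has entropy log2(4) = 2 bits.
uniform = [0] * 25 + [1] * 25 + [2] * 25 + [3] * 25
print(habitat_entropy(uniform))  # → 2.0
```

A homogeneous tumor (a single habitat) scores 0, giving a single scalar that separates uniform from ecologically diverse lesions.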

Wang G, Fu J, Wu J, Luo X, Zhou Y, Liu X, Li K, Lin J, Shen B, Zhang S

PubMed · Sep 22, 2025
The performance of deep learning models for medical image segmentation is often limited when training data or annotations are scarce. Self-Supervised Learning (SSL) is an appealing solution to this dilemma because of its ability to learn features from large amounts of unannotated images. Existing SSL methods have focused on pretraining either an encoder for global feature representation or an encoder-decoder structure for image restoration, where the gap between pretext and downstream tasks limits the usefulness of pretrained decoders in downstream segmentation. In this work, we propose a novel SSL strategy named Volume Fusion (VolF) for pretraining 3D segmentation models. It minimizes the gap between pretext and downstream tasks by introducing a pseudo-segmentation pretext task, in which two sub-volumes are fused by a discretized block-wise fusion coefficient map. The model takes the fused result as input and predicts the category of the fusion coefficient for each voxel, so it can be trained with standard supervised segmentation loss functions without manual annotations. Experiments with an abdominal CT dataset for pretraining and both in-domain and out-of-domain downstream datasets showed that VolF yielded a large performance gain over training from scratch, with faster convergence, and outperformed several state-of-the-art SSL methods. In addition, it generalizes to different network structures, and the learned features transfer well to different body parts and modalities.
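The fusion pretext task described above can be sketched on toy 1-D "volumes" (the paper works on 3D patches; the function name and details here are illustrative assumptions, not the authors' code):

```python
import random

def volume_fusion(v0, v1, n_blocks, n_classes, seed=0):
    """VolF-style pretext task on flat 1-D 'volumes' (sketch).

    Each block is assigned a fusion coefficient a = k / (n_classes - 1)
    with k drawn from {0, ..., n_classes - 1}. Each fused voxel is
    (1 - a) * v0 + a * v1, and its pseudo-segmentation label is k, so a
    segmentation model can be trained on (fused, labels) pairs with a
    standard supervised loss and no manual annotation.
    """
    assert len(v0) == len(v1) and len(v0) % n_blocks == 0
    rng = random.Random(seed)
    block = len(v0) // n_blocks
    fused, labels = [], []
    for b in range(n_blocks):
        k = rng.randrange(n_classes)
        a = k / (n_classes - 1)
        for i in range(b * block, (b + 1) * block):
            fused.append((1 - a) * v0[i] + a * v1[i])
            labels.append(k)
    return fused, labels
```

With `n_classes = 2` the task degenerates to predicting, per voxel, which of the two sub-volumes it came from; larger `n_classes` makes the pseudo-segmentation finer-grained.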

Li YT, Chen DY, Kuo DP, Chen YC, Cheng SJ, Hsieh LC, Chiang YH, Chen CY

PubMed · Sep 22, 2025
Persistent working memory decline (PWMD) is a common sequela of mild traumatic brain injury (mTBI), yet reliable biomarkers for predicting long-term working memory outcomes remain lacking. The glymphatic system, a brain-wide waste clearance network, plays a crucial role in cognitive recovery. The diffusion tensor imaging analysis along the perivascular space (DTI-ALPS) index, a noninvasive magnetic resonance imaging (MRI)-based technique, offers a promising approach to evaluating perivascular fluid dynamics, a key component of glymphatic function. However, its role in long-term working memory dysfunction remains underexplored, particularly in the presence of traumatic cerebral microbleeds (CMBs) and poor sleep quality (as measured by the Pittsburgh Sleep Quality Index, PSQI), both of which have been suggested to disrupt glymphatic clearance, exacerbate neurovascular impairment, and contribute to cognitive decline. This study investigates the interplay between CMBs, sleep quality, and perivascular fluid dynamics in predicting PWMD after mTBI. We further assess the feasibility of a machine learning-based approach to enhance individualized working memory outcome prediction. Between September 2015 and October 2022, 3,068 patients presenting with concussion were screened, and 471 met the inclusion criteria for mTBI. A total of 184 patients provided informed consent, and 61 completed both baseline and 1-year follow-up assessments. In addition, 61 demographically matched healthy controls were recruited. Susceptibility-weighted imaging was used to detect CMBs, while perivascular fluid dynamics were assessed using the DTI-ALPS index. Sleep quality was evaluated using the PSQI, and working memory was measured with the Digit Span test at baseline and 1 year post-injury.
Mediation analysis was conducted to examine the indirect effects of perivascular fluid dynamics on cognitive outcomes, and a machine learning model incorporating DTI-ALPS, CMBs, sleep quality, and baseline cognitive scores was developed for individualized prediction. CMBs were present in 29.5% of mTBI patients and were associated with significantly lower DTI-ALPS index values (p < 0.001), suggesting compromised perivascular fluid dynamics and glymphatic impairment. Poor sleep quality (PSQI > 8) correlated with lower 1-year Digit Span scores (r = -0.551, p < 0.001), supporting the link between disrupted glymphatic function and cognitive decline. Mediation analysis revealed that the DTI-ALPS index partially mediated the relationship between CMBs and PWMD (Sobel test, p = 0.031). Machine learning-based predictive modeling achieved high accuracy in forecasting 1-year working memory outcomes (R² = 0.78). These findings highlight the potential of noninvasive MRI-based assessment of perivascular fluid dynamics as an early biomarker for PWMD. Given the essential role of the glymphatic system in sleep and memory, integrating DTI-ALPS with CMB detection and sleep quality evaluation may enhance prognostic accuracy and inform personalized rehabilitation strategies for mTBI patients.
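The Sobel test used in the mediation analysis has a closed form; a small self-contained sketch of the generic formula (illustrative path estimates, not the study's data) is:

```python
import math

def sobel_z(a, se_a, b, se_b):
    """Sobel z-statistic for an indirect (mediated) effect a*b.

    a:  path from predictor (e.g. CMBs) to mediator (e.g. DTI-ALPS),
    b:  path from mediator to outcome (e.g. Digit Span score),
    se_a, se_b: their standard errors.
    """
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

def two_sided_p(z):
    """Two-sided p-value under the standard normal, via erfc."""
    return math.erfc(abs(z) / math.sqrt(2))

# Hypothetical path estimates for illustration only:
z = sobel_z(0.5, 0.1, 0.4, 0.1)
p = two_sided_p(z)
```

A significant z (p < 0.05) indicates the mediator carries a reliable part of the predictor-to-outcome effect, which is the logic behind the reported partial mediation.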

Fukuda M, Nomoto D, Nozawa M, Kise Y, Kuwada C, Kubo H, Ariji E, Ariji Y

PubMed · Sep 22, 2025
This study aimed to identify the most effective diagnostic assistance system for assessing the relationship between mandibular third molars (M3M) and mandibular canals (MC) using panoramic radiographs. In total, 2,103 M3M were included from patients in whom the M3M and MC overlapped on panoramic radiographs. All M3M were classified into high-risk and low-risk groups based on the degree of contact with the MC observed on computed tomography. The contact classification was evaluated using four machine learning models (Prediction One software, AdaBoost, XGBoost, and random forest), three convolutional neural networks (CNNs) (EfficientNet-B0, ResNet18, and Inception v3), and three human observers (two radiologists and one oral surgery resident). Receiver operating characteristic curves were plotted; the area under the curve (AUC), accuracy, sensitivity, and specificity were calculated. Factors contributing to prediction of high-risk cases by machine learning models were identified. Machine learning models demonstrated AUC values ranging from 0.84 to 0.88, with accuracy ranging from 0.81 to 0.88 and sensitivity of 0.80, indicating consistently strong performance. Among the CNNs, ResNet18 achieved the best performance, with an AUC of 0.83. The human observers exhibited AUC values between 0.67 and 0.80. Three factors were identified as contributing to prediction of high-risk cases by machine learning models: increased root radiolucency, diversion of the MC, and narrowing of the MC. Machine learning models demonstrated strong performance in predicting the three-dimensional relationship between the M3M and MC.
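The AUC values reported above correspond to the Mann-Whitney rank formulation of the ROC area; a self-contained sketch (illustrative, not the study's evaluation code):

```python
def roc_auc(scores, labels):
    """AUC as the probability that a positive outranks a negative
    (Mann-Whitney U formulation; ties count as half a win)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: one misranked pair out of four → AUC 0.75.
print(roc_auc([0.9, 0.8, 0.4, 0.3], [1, 0, 1, 0]))  # → 0.75
```

This pairwise definition is threshold-free, which is why it can compare the machine learning models, the CNNs, and the human observers on one scale.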

Bai X, Wu Z, Lu L, Zhang H, Zheng H, Zhang Y, Liu X, Zhang Z, Zhang G, Zhang D, Jin Z, Sun H

PubMed · Sep 22, 2025
To develop a deep-learning model for segmenting and classifying adrenal nodules as either lipid-poor adenoma (LPA) or nodular hyperplasia (NH) on contrast-enhanced computed tomography (CECT) images. This retrospective dual-center study included 164 patients (median age 51.0 years; 93 females) with pathologically confirmed LPA or NH. The model was trained on 128 patients from the internal center and validated on 36 external cases. Radiologists annotated adrenal glands and nodules on 1-mm portal-venous phase CT images. We propose Mamba-USeg, a novel state-space model (SSM)-based multi-class segmentation method that performs simultaneous segmentation and classification. Performance was evaluated using the mean Dice similarity coefficient (mDSC) for segmentation and sensitivity/specificity for classification, with comparisons against MultiResUNet and CPFNet. In per-slice segmentation, the model yielded an mDSC of 0.855 for the adrenal gland; for nodule segmentation, it achieved mDSCs of 0.869 (LPA) and 0.863 (NH), significantly outperforming both previous models: MultiResUNet (LPA, p < 0.001; NH, p = 0.014) and CPFNet (LPA, p = 0.003; NH, p = 0.023). Per-slice classification demonstrated sensitivity of 95.3% (95% confidence interval [CI]: 91.3-96.6%) and specificity of 92.7% (95% CI: 91.9-93.6%) for LPA, and sensitivity of 94.2% (95% CI: 89.7-97.7%) and specificity of 91.5% (95% CI: 90.4-92.4%) for NH. Classification accuracy in the external cohort was 91.7% (95% CI: 76.8-98.9%). The proposed multi-class segmentation model can accurately segment and differentiate between LPA and NH on CECT images, demonstrating superior performance to existing methods.
Question Accurate differentiation between LPA and NH on imaging remains clinically challenging yet critically important for guiding appropriate treatment approaches.
Findings Mamba-USeg, a multi-class segmentation model utilizing pixel-level analysis and majority-voting strategies, can accurately segment and classify adrenal nodules as LPA or NH.
Clinical relevance The proposed multi-class segmentation model can simultaneously segment and classify adrenal nodules, outperforming previous models in accuracy; it aids clinical decision-making and can thereby reduce unnecessary surgeries in adrenal hyperplasia patients.
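The mDSC metric used above can be sketched per class; a toy implementation on flat label maps (an illustration, not the Mamba-USeg evaluation code):

```python
def dice_per_class(pred, gt, classes):
    """Dice similarity coefficient per class for flat label maps.

    pred and gt are equal-length lists of integer class labels;
    Dice = 2|P ∩ G| / (|P| + |G|) for each class's voxel sets.
    """
    out = {}
    for c in classes:
        p = {i for i, v in enumerate(pred) if v == c}
        g = {i for i, v in enumerate(gt) if v == c}
        denom = len(p) + len(g)
        # Convention: a class absent from both maps scores 1.0.
        out[c] = 2 * len(p & g) / denom if denom else 1.0
    return out
```

Averaging the per-class values over slices gives the mDSC figures quoted for the gland and the two nodule types.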

Yamamoto A, Sato S, Ueda D, Walston SL, Kageyama K, Jogo A, Nakano M, Kotani K, Uchida-Kobayashi S, Kawada N, Miki Y

PubMed · Sep 22, 2025
The purpose of this study was to establish a predictive deep learning (DL) model for clinically significant portal hypertension (CSPH) based on a single cross-sectional non-contrast CT image and to compare four representative positional images to determine the most suitable for the detection of CSPH. The study included 421 patients with chronic liver disease who underwent hepatic venous pressure gradient measurement at our institution between May 2007 and January 2024. Patients were randomly classified into training, validation, and test datasets at a ratio of 8:1:1. Non-contrast cross-sectional CT images from four target areas of interest were used to create four deep-learning-based models for predicting CSPH. The areas of interest were the umbilical portion of the portal vein (PV), the first right branch of the PV, the confluence of the splenic vein and PV, and the maximum cross-section of the spleen. The models were implemented using convolutional neural networks with a multilayer perceptron as the classifier. The model with the best predictive ability for CSPH was then compared to 13 conventional evaluation methods. Among the four areas, the umbilical portion of the PV had the highest predictive ability for CSPH (area under the curve [AUC]: 0.80). At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model. We developed an algorithm that can predict CSPH immediately from a single slice of non-contrast CT, using the most suitable image of the umbilical portion of the PV.
Question CSPH predicts complications but requires invasive hepatic venous pressure gradient measurement for diagnosis.
Findings At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model.
Clinical relevance This study shows that a DL model can accurately predict CSPH from a single non-contrast CT image, providing a non-invasive alternative to invasive methods and aiding early detection and risk stratification in chronic liver disease without image manipulation.
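Threshold selection by the Youden index, as used for the reported sensitivity/specificity, can be sketched as follows (generic procedure, not the study's code):

```python
def best_youden_threshold(scores, labels):
    """Threshold maximizing Youden's J = sensitivity + specificity - 1.

    Predict positive when score >= threshold; candidate thresholds are
    the observed scores. Returns (threshold, J).
    """
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = None, -1.0
    for t in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= t)
        tn = sum(1 for s, y in zip(scores, labels) if y == 0 and s < t)
        j = tp / pos + tn / neg - 1
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

The chosen operating point weights sensitivity and specificity equally, which is why a single (0.867, 0.615) pair summarizes the model at that threshold.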

Yang S, Hua Z, Chen Y, Liu L, Wang Z, Cheng Y, Wang J, Xu Z, Chen C

PubMed · Sep 22, 2025
This study addresses the unmet clinical need for validated risk stratification tools in salvage CT-guided percutaneous lung biopsy (PNLB) following nondiagnostic transbronchial lung biopsy (TBLB). We aimed to develop machine learning models predicting severe adverse events (SAEs) in PNLB (Model 1) and diagnostic success of salvage PNLB after TBLB failure (Model 2). This multicenter predictive modeling study enrolled 2910 cases undergoing PNLB across two centers (Center 1: n = 2653, 2016-2020; Center 2: n = 257, 2017-2022) with complete imaging and clinical documentation meeting predefined inclusion and exclusion criteria. Key variables were selected via LASSO regression, followed by development and validation of Model 1 (incorporating sex, smoking, pleural contact, lesion size, and puncture depth) and Model 2 (including age, lesion size, lesion characteristics, and post-bronchoscopic pathological categories (PBPCs)) using ten machine learning algorithms. Model performance was rigorously evaluated through discrimination metrics, calibration curves, and decision curve analysis to assess clinical applicability. Across the 2653 and 257 cases from the two centers, Model 1 achieved an external validation ROC-AUC of 0.717 (95% CI: 0.609-0.825) and PR-AUC of 0.258 (95% CI: 0.0365-0.708), while Model 2 exhibited a ROC-AUC of 0.884 (95% CI: 0.784-0.984) and PR-AUC of 0.852 (95% CI: 0.784-0.896), with XGBoost outperforming the other algorithms. The dual XGBoost system stratifies salvage PNLB candidates by quantifying SAE risk (AUC = 0.717) versus diagnostic yield (AUC = 0.884), addressing the unmet need for personalized biopsy pathway optimization.
Question Current tools cannot quantify severe adverse event (SAE) risks versus salvage diagnostic success for CT-guided lung biopsy (PNLB) after failed transbronchial biopsy (TBLB).
Findings Dual XGBoost models successfully predicted the risk of PNLB SAEs (AUC = 0.717) and diagnostic success after TBLB failure (AUC = 0.884), with validated clinical stratification benefits.
Clinical relevance The dual XGBoost system guides clinical decision-making by integrating individual SAE risk with predictors of diagnostic success, enabling personalized salvage biopsy strategies that balance safety and diagnostic yield.
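The decision curve analysis mentioned in the evaluation scores a model by its net benefit at a threshold probability; a minimal sketch of the standard formula (hypothetical predictions, not the study's data):

```python
def net_benefit(probs, labels, pt):
    """Net benefit at threshold probability pt (decision curve analysis).

    'Treat' (e.g. proceed to salvage biopsy) when predicted risk >= pt:
    NB = TP/n - (FP/n) * pt / (1 - pt),
    i.e. true positives credited, false positives debited at the odds
    implied by the threshold.
    """
    n = len(labels)
    tp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 1)
    fp = sum(1 for p, y in zip(probs, labels) if p >= pt and y == 0)
    return tp / n - (fp / n) * pt / (1 - pt)
```

Plotting net benefit across a range of `pt` against "treat all" and "treat none" is what shows whether a model's stratification is clinically useful, beyond its AUC.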

Yang R, Liu J, Li L, Fan Y, Shu Y, Wu W, Shu J

PubMed · Sep 22, 2025
To construct a multi-task global decision support system based on preoperative enhanced CT features to predict mismatch repair (MMR) status, T stage, and pathological risk factors (e.g., histological differentiation, lymphovascular invasion) in patients with non-metastatic colon cancer (NMCC). A total of 372 eligible NMCC participants (training cohort: n = 260; testing cohort: n = 112) were enrolled from two institutions. The 34 features (imaging features: n = 27; clinical features: n = 7) were subjected to feature selection using LASSO, Boruta, ReliefF, mRMR, and XGBoost-RFE. For each of the three tasks (MMR, pT staging, and pathological risk factors), four features were selected to construct the total feature set. Subsequently, the multi-task model was built with 14 machine learning algorithms, and predictive performance was evaluated using the area under the receiver operating characteristic curve (AUC). The final feature set for constructing the model was based on the mRMR feature-screening method. For MMR classification, pT staging, and pathological risk factors, the SVC, Bernoulli NB, and Decision Tree algorithms were selected, respectively, with AUCs of 0.80 [95% CI 0.71-0.89], 0.82 [95% CI 0.71-0.94], and 0.85 [95% CI 0.77-0.93] on the test set. Furthermore, a direct multiclass model constructed using the total feature set achieved an average AUC of 0.77 across four management plans in the test set. The proposed multi-task machine learning model enables non-invasive and precise preoperative stratification of patients with NMCC by MMR status, pT stage, and pathological risk factors. This predictive tool demonstrates significant potential for preoperative risk stratification and guiding individualized therapeutic strategies.
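The mRMR screening that produced the final feature set can be sketched with a greedy correlation-based criterion (a simplified stand-in: real mRMR implementations typically use mutual information rather than Pearson correlation):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x) ** 0.5
    vy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def mrmr(features, target, k):
    """Greedy minimum-redundancy maximum-relevance selection.

    At each step, pick the feature maximizing |corr with target| minus
    the mean |corr| with the already-selected features.
    """
    selected, remaining = [], list(features)
    while remaining and len(selected) < k:
        def score(name):
            rel = abs(pearson(features[name], target))
            if not selected:
                return rel
            red = sum(abs(pearson(features[name], features[s]))
                      for s in selected) / len(selected)
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected
```

The redundancy term is what lets the procedure skip a near-duplicate of an already-chosen feature in favor of one carrying complementary signal.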

Atakır K, Işın K, Taş A, Önder H

PubMed · Sep 22, 2025
This study aimed to evaluate the diagnostic accuracy of Chat Generative Pre-trained Transformer (ChatGPT) version 4 Omni (ChatGPT-4o) in radiology across seven information input combinations (image, clinical data, and multiple-choice options) to assess the consistency of its outputs across repeated trials and to compare its performance with that of human radiologists. We tested 129 distinct radiology cases under seven input conditions (varying presence of imaging, clinical context, and answer options). Each case was processed by ChatGPT-4o for seven different input combinations on three separate accounts. Diagnostic accuracy was determined by comparison with ground-truth diagnoses, and interobserver consistency was measured using Fleiss' kappa. Pairwise comparisons were performed with the Wilcoxon signed-rank test. Additionally, the same set of cases was evaluated by nine radiology residents to benchmark ChatGPT-4o's performance against human diagnostic accuracy. ChatGPT-4o's diagnostic accuracy was lowest for "image only" (19.90%) and "options only" (20.67%) conditions. The highest accuracy was observed in "image + clinical information + options" (80.88%) and "clinical information + options" (75.45%) conditions. The highest interobserver agreement was observed in the "image + clinical information + options" condition (κ = 0.733) and the lowest was in the "options only" condition (κ = 0.023), suggesting that more information improves consistency. However, there was no effective benefit of adding imaging data over already provided clinical data and options, as seen in post-hoc analysis. In human comparison, ChatGPT-4o outperformed radiology residents in text-based configurations (75.45% vs. 42.89%), whereas residents showed slightly better performance in image-based tasks (64.13% vs. 61.24%). Notably, when residents were allowed to use ChatGPT-4o as a support tool, their image-based diagnostic accuracy increased from 63.04% to 74.16%. 
ChatGPT-4o performs well when provided with rich textual input but remains limited in purely image-based diagnoses. Its accuracy and consistency increase with multimodal input, yet adding imaging does not significantly improve performance beyond clinical context and diagnostic options alone. The model's superior performance to residents in text-based tasks underscores its potential as a diagnostic aid in structured scenarios. Furthermore, its integration as a support tool may enhance human diagnostic accuracy, particularly in image-based interpretation. Although ChatGPT-4o is not yet capable of reliably interpreting radiologic images on its own, it demonstrates strong performance in text-based diagnostic reasoning. Its integration into clinical workflows, particularly for triage, structured decision support, or educational purposes, may augment radiologists' diagnostic capacity and consistency.
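Fleiss' kappa, used above to measure consistency of outputs across the three accounts, can be computed as follows (generic implementation on toy counts, not the study's data):

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for ratings[item][category] = count of raters.

    All items must be rated by the same number of raters n.
    Returns 1.0 for perfect agreement, 0 for chance-level agreement.
    """
    N = len(ratings)               # number of items (cases)
    n = sum(ratings[0])            # raters per item (accounts/trials)
    k = len(ratings[0])            # number of categories (diagnoses)
    # Overall proportion of assignments falling in each category.
    p = [sum(row[j] for row in ratings) / (N * n) for j in range(k)]
    # Mean observed pairwise agreement per item.
    P_bar = sum(
        (sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings
    ) / N
    # Expected agreement by chance.
    P_e = sum(pj * pj for pj in p)
    return (P_bar - P_e) / (1 - P_e)
```

Identical answers on every trial give kappa = 1.0, matching the intuition behind the high agreement (κ = 0.733) in the fully specified condition versus near-chance consistency (κ = 0.023) with options only.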

Fonseca FJPO, Matias BBR, Pacheco P, Muraoka CSAS, Silva EVF, Sesma N

PubMed · Sep 22, 2025
This case report describes the application of an integrated digital workflow in which diagnosis, planning, and execution were enhanced by artificial intelligence (AI), enabling an effective interdisciplinary esthetic-functional rehabilitation. With AI-powered software, the sequence from orthodontic treatment to final rehabilitation achieved high predictability, addressing the patient's chief complaints. A patient presented with a missing maxillary left central incisor (tooth 11) and dissatisfaction with a removable partial denture. Clinical examination revealed a gummy smile, a deviated midline, and a disproportionate mesiodistal space relative to the midline. Initial documentation included photographs, intraoral scanning, and cone-beam computed tomography of the maxilla. These data were integrated into digital planning software to create an interdisciplinary plan. The workflow included prosthetically guided orthodontic treatment with aligners, a motivational mockup, guided implant surgery, peri-implant soft-tissue management, and final prosthetic rehabilitation using a CAD/CAM approach. This digital workflow enhanced communication among the multidisciplinary team and with the patient, ensuring highly predictable esthetic and functional outcomes. Comprehensive digital workflows improve diagnostic accuracy, streamline planning with AI, and facilitate patient understanding. This approach increases patient satisfaction, supports interdisciplinary collaboration, and promotes treatment adherence.
