
Objective Task-Based Evaluation of Quantitative Medical Imaging Methods: Emerging Frameworks and Future Directions.

Liu Y, Xia H, Obuchowski NA, Laforest R, Rahmim A, Siegel BA, Jha AK

PubMed · Aug 19, 2025
Quantitative imaging (QI) holds significant potential across diverse clinical applications. For clinical translation of QI, rigorous evaluation on clinically relevant tasks is essential. This article outlines 4 emerging evaluation frameworks: virtual imaging trials, evaluation with clinical data in the absence of ground truth, evaluation for joint detection and quantification tasks, and evaluation of QI methods that produce multidimensional outputs. These frameworks are presented in the context of recent advancements in PET, such as long axial field-of-view PET and the development of artificial intelligence algorithms for PET. We conclude by discussing future research directions for evaluating QI methods.

A state-of-the-art new method for diagnosing atrial septal defects with origami technique augmented dataset and a column-based statistical feature extractor.

Yaman I, Kilic I, Yaman O, Poyraz F, Erdem Kaya E, Ozgur Baris V, Ciris S

PubMed · Aug 19, 2025
Early, highly accurate diagnosis of atrial septal defects (ASDs) from chest X-ray (CXR) images is vital. This study created a dataset of chest X-ray images obtained from different adult subjects. For data augmentation, an Origami paper-folding technique, applied to our dataset for the first time in the literature, was used, yielding two different augmented datasets. Column-wise statistical features (mean, standard deviation, median, variance, and skewness) were extracted from the images in these datasets and classified with a support vector machine (SVM); k-nearest neighbors (k-NN) and decision tree classifiers served as comparators. Classification of the Origami-augmented datasets with the SVM achieved state-of-the-art accuracy (99.69%), clearly surpassing deep learning-based artificial intelligence methods.
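
A minimal sketch of the column-based statistical feature extractor plus SVM pipeline described above, assuming grayscale CXR arrays; the kernel choice, array shapes, and placeholder data are illustrative assumptions, not the authors' code, and the origami augmentation step is not shown.

```python
import numpy as np
from scipy.stats import skew
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def column_features(image: np.ndarray) -> np.ndarray:
    """Mean, std, median, variance, and skewness computed per image column."""
    feats = [
        image.mean(axis=0),
        image.std(axis=0),
        np.median(image, axis=0),
        image.var(axis=0),
        skew(image, axis=0),
    ]
    return np.concatenate(feats)  # 5 * n_columns features per image

# X_images: (n_samples, H, W) grayscale images; y: binary ASD labels.
# Random placeholders stand in for the augmented CXR dataset.
X_images = np.random.rand(40, 64, 64)
y = np.random.randint(0, 2, size=40)

X = np.stack([column_features(img) for img in X_images])
clf = SVC(kernel="rbf")  # kernel is an assumption
print(cross_val_score(clf, X, y, cv=5).mean())
```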

Interpreting convolutional neural network explainability for head-and-neck cancer radiotherapy organ-at-risk segmentation.

Strijbis VIJ, Gurney-Champion OJ, Grama DI, Slotman BJ, Verbakel WFAR

PubMed · Aug 19, 2025
Convolutional neural networks (CNNs) have emerged to reduce clinical resources and standardize auto-contouring of organs-at-risk (OARs). Although CNNs perform adequately for most patients, understanding when a CNN might fail is critical for effective and safe clinical deployment. However, the limitations of CNNs are poorly understood because of their black-box nature. Explainable artificial intelligence (XAI) can expose CNNs' inner mechanisms for classification. Here, we investigate the inner mechanisms of CNNs for segmentation and explore a novel, computational approach to flag potentially insufficient parotid gland (PG) contours a priori. First, 3D UNets were trained in three PG segmentation situations using (1) synthetic cases; (2) 1925 clinical computed tomography (CT) scans with typical contours; and (3) more consistent contours curated through a previously validated auto-curation step. Then, we generated attribution maps for seven XAI methods and qualitatively assessed them for congruency between simulated and clinical contours, and for how much XAI agreed with expert reasoning. To objectify observations, we explored persistent homology intensity filtrations to capture essential topological characteristics of XAI attributions. Principal component (PC) eigenvalues of Euler characteristic profiles were correlated with spatial agreement (Dice-Sørensen similarity coefficient; DSC). Evaluation used sensitivity, specificity, and the area under the receiver operating characteristic (AUROC) curve on an external AAPM dataset, where, as proof of principle, we regard the lowest 15% DSC as insufficient. PatternNet attributions (PNet-A) focused on soft-tissue structures, whereas guided backpropagation (GBP) highlighted both soft-tissue and high-density structures (e.g. mandible bone), which was congruent with the synthetic situations. Both methods typically had higher/denser activations in better auto-contoured medial and anterior lobes. Curated models produced "cleaner" gradient class-activation mapping (GCAM) attributions. Quantitative analysis showed that PC λ1 of guided GCAM's (GGCAM) Euler characteristic (EC) profile had good predictive value (sensitivity > 0.85, specificity > 0.90) of DSC for AAPM cases, with AUROC = 0.66, 0.74, 0.94, and 0.83 for GBP, GCAM, GGCAM, and PNet-A, respectively. For λ1 < -1.8e3 of GGCAM's EC profile, 87% of cases were insufficient. GBP and PNet-A qualitatively agreed most with expert reasoning on directly (structure borders) and indirectly (proxies used for identifying structure borders) important features for PG segmentation. Additionally, this work investigated, as proof of principle, how topological data analysis could be used for quantitative XAI signal analysis to mark potentially inadequate CNN segmentations a priori, using only features from inside the predicted PG. This work used the PG as a well-understood segmentation paradigm and may extend to target volumes and other organs-at-risk.
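
A minimal sketch of one topological summary in the spirit of the above: an Euler characteristic profile over an intensity (superlevel-set) filtration of a 2D attribution map, followed by PCA. Function names, threshold grid, and placeholder maps are assumptions, not the study's implementation, and the sketch uses PC scores rather than the paper's exact eigenvalue-based statistic.

```python
import numpy as np
from skimage.measure import euler_number
from sklearn.decomposition import PCA

def ec_profile(attribution: np.ndarray, n_levels: int = 64) -> np.ndarray:
    """Euler characteristic of the superlevel sets {attribution >= t}."""
    thresholds = np.linspace(attribution.min(), attribution.max(), n_levels)
    return np.array([euler_number(attribution >= t) for t in thresholds])

# One attribution map per case (random placeholders for e.g. GGCAM outputs).
maps = [np.random.rand(128, 128) for _ in range(30)]
profiles = np.stack([ec_profile(m) for m in maps])

# Per-case scores on the first principal component of the EC profiles,
# analogous to the PC quantities the study correlates with DSC.
pca = PCA(n_components=2).fit(profiles)
pc1_scores = pca.transform(profiles)[:, 0]
```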

CT-based auto-segmentation of multiple target volumes for all-in-one radiotherapy in rectal cancer patients.

Li X, Wang L, Yang M, Li X, Zhao T, Wang M, Lu S, Ji Y, Zhang W, Jia L, Peng R, Wang J, Wang H

PubMed · Aug 19, 2025
This study aimed to evaluate the clinical feasibility and performance of CT-based auto-segmentation models integrated into an All-in-One (AIO) radiotherapy workflow for rectal cancer. The study included 312 rectal cancer patients, with 272 used to train three nnU-Net models for CTV45, CTV50, and GTV segmentation, and 40 for evaluation across one internal cohort (n = 10), one clinical AIO cohort (n = 10), and two external cohorts (n = 10 each). Segmentation accuracy (DSC, HD, HD95, ASSD, ASD) and time efficiency were assessed. In the internal testing set, mean DSC of CTV45, CTV50, and GTV were 0.90, 0.86, and 0.71; HD were 17.08, 25.48, and 79.59 mm; HD95 were 4.89, 7.33, and 56.49 mm; ASSD were 1.23, 1.90, and 6.69 mm; and ASD were 1.24, 1.58, and 11.61 mm. Auto-segmentation reduced manual delineation time by 63.3–88.3% (p < 0.0001). In clinical practice, average DSC of CTV45, CTV50, and GTV were 0.93, 0.88, and 0.78; HD were 13.56, 23.84, and 35.38 mm; HD95 were 3.33, 6.46, and 21.34 mm; ASSD were 0.78, 1.49, and 3.30 mm; and ASD were 0.74, 1.18, and 2.13 mm. The multi-center testing also showed the applicability of these models: the average DSC of CTV45 and GTV were 0.84 and 0.80, respectively. The models demonstrated high accuracy and clinical utility, effectively streamlining target volume delineation and reducing manual workload in routine practice. The study protocol was approved by the Institutional Review Board of Peking University Third Hospital (Approval No. (2024) Medical Ethics Review No. 182-01).
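
A minimal sketch of two of the reported metrics, DSC and HD95, for binary segmentation masks, computed with distance transforms; this follows common definitions under stated assumptions (boundary voxels via erosion, symmetric 95th-percentile surface distance) and is not the study's evaluation code.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dsc(a: np.ndarray, b: np.ndarray) -> float:
    """Dice-Sørensen similarity coefficient of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hd95(a: np.ndarray, b: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    """95th-percentile symmetric surface distance, in mm if spacing is in mm."""
    surf_a = a & ~binary_erosion(a)  # boundary voxels of mask a
    surf_b = b & ~binary_erosion(b)
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_b = distance_transform_edt(~surf_b, sampling=spacing)
    dist_to_a = distance_transform_edt(~surf_a, sampling=spacing)
    d_ab = dist_to_b[surf_a]
    d_ba = dist_to_a[surf_b]
    return float(np.percentile(np.concatenate([d_ab, d_ba]), 95))

# Toy example: two overlapping boxes in a small volume.
gt = np.zeros((32, 32, 32), bool); gt[8:20, 8:20, 8:20] = True
pred = np.zeros_like(gt);          pred[10:22, 10:22, 10:22] = True
print(f"DSC={dsc(gt, pred):.3f}, HD95={hd95(gt, pred):.2f} mm")
```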

Deep learning for detection and diagnosis of intrathoracic lymphadenopathy from endobronchial ultrasound multimodal videos: A multi-center study.

Chen J, Li J, Zhang C, Zhi X, Wang L, Zhang Q, Yu P, Tang F, Zha X, Wang L, Dai W, Xiong H, Sun J

PubMed · Aug 19, 2025
Convex probe endobronchial ultrasound (CP-EBUS) ultrasonographic features are important for diagnosing intrathoracic lymphadenopathy. Conventional methods for CP-EBUS imaging analysis rely heavily on physician expertise. To overcome this obstacle, we propose a deep learning-aided diagnostic system (AI-CEMA) to automatically select representative images, identify lymph nodes (LNs), and differentiate benign from malignant LNs based on CP-EBUS multimodal videos. AI-CEMA is first trained on 1,006 LNs from a single center and validated in a retrospective study, then demonstrated in a prospective multi-center study of 267 LNs. AI-CEMA achieves an area under the curve (AUC) of 0.8490 (95% confidence interval [CI], 0.8000-0.8980), which is comparable to experienced experts (AUC, 0.7847 [95% CI, 0.7320-0.8373]; p = 0.080). Additionally, AI-CEMA is successfully transferred to a pulmonary lesion diagnosis task and obtains a commendable AUC of 0.8192 (95% CI, 0.7676-0.8709). In conclusion, AI-CEMA shows great potential in the clinical diagnosis of intrathoracic lymphadenopathy and pulmonary lesions by providing automated, noninvasive, expert-level diagnosis.
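
A minimal sketch of how an AUC with a bootstrap 95% CI (as reported above) can be computed for a binary LN classifier. The resampling scheme is a common choice and an assumption here; the study's actual statistical method is not specified in the abstract, and the data are placeholders.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auc_with_ci(y_true, y_score, n_boot=2000, seed=0):
    """Point-estimate AUC plus a percentile bootstrap 95% CI."""
    rng = np.random.default_rng(seed)
    auc = roc_auc_score(y_true, y_score)
    boots, n = [], len(y_true)
    while len(boots) < n_boot:
        idx = rng.integers(0, n, n)
        if len(np.unique(y_true[idx])) < 2:  # resample must contain both classes
            continue
        boots.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(boots, [2.5, 97.5])
    return auc, (lo, hi)

# Toy usage with placeholder predictions on 267 validation LNs.
y = np.random.randint(0, 2, 267)
p = np.clip(y * 0.6 + np.random.rand(267) * 0.5, 0, 1)
print(auc_with_ci(y, p))
```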

Longitudinal CE-MRI-based Siamese network with machine learning to predict tumor response in HCC after DEB-TACE.

Wei N, Mathy RM, Chang DH, Mayer P, Liermann J, Springfeld C, Dill MT, Longerich T, Lurje G, Kauczor HU, Wielpütz MO, Öcal O

PubMed · Aug 19, 2025
Accurate prediction of tumor response after drug-eluting bead transarterial chemoembolization (DEB-TACE) remains challenging in hepatocellular carcinoma (HCC), given tumor heterogeneity and dynamic changes over time. Existing prediction models based on single-timepoint imaging do not capture dynamic treatment-induced changes. This study aims to develop and validate a predictive model that integrates deep learning and machine learning algorithms on longitudinal contrast-enhanced MRI (CE-MRI) to predict treatment response in HCC patients undergoing DEB-TACE. This retrospective study included 202 HCC patients treated with DEB-TACE from 2004 to 2023, divided into a training cohort (n = 141) and a validation cohort (n = 61). Radiomics and deep learning features were extracted from standardized longitudinal CE-MRI to capture dynamic tumor changes. Feature selection involved correlation analysis, minimum redundancy maximum relevance, and least absolute shrinkage and selection operator (LASSO) regression. The patients were categorized into two groups: the objective response group (n = 123, 60.9%; complete response = 35, 28.5%; partial response = 88, 71.5%) and the non-response group (n = 79, 39.1%; stable disease = 62, 78.5%; progressive disease = 17, 21.5%). Predictive models were constructed using radiomics, deep learning, and integrated features. The area under the receiver operating characteristic curve (AUC) was used to evaluate the performance of the models. We retrospectively evaluated 202 patients (62.67 ± 9.25 years old) with HCC treated with DEB-TACE. A total of 7,182 radiomics features and 4,096 deep learning features were extracted from the longitudinal CE-MRI images. The integrated model was developed using 13 quantitative radiomics features and 4 deep learning features and demonstrated acceptable and robust performance, with an AUC of 0.941 (95% CI: 0.893–0.989) in the training cohort and an AUC of 0.925 (95% CI: 0.850–0.998), accuracy of 86.9%, sensitivity of 83.7%, and specificity of 94.4% in the validation set. This study presents a predictive model based on longitudinal CE-MRI data to estimate tumor response to DEB-TACE in HCC patients. By capturing tumor dynamics and integrating radiomics features with deep learning features, the model has the potential to guide individualized treatment strategies and inform clinical decision-making regarding patient management. The online version contains supplementary material available at 10.1186/s40644-025-00926-5.
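
A minimal sketch of the three-stage feature selection described above (correlation filter, then mRMR, then LASSO). The greedy mRMR criterion (normalized F-score minus mean absolute correlation) and all thresholds are simplifying assumptions, not the authors' exact tooling.

```python
import numpy as np
import pandas as pd
from sklearn.feature_selection import f_classif
from sklearn.linear_model import LogisticRegression

def select_features(X: pd.DataFrame, y, corr_thresh=0.9, k_mrmr=30, C=0.1):
    # 1) Correlation filter: drop one feature of each highly correlated pair.
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    X = X.drop(columns=[c for c in upper.columns if (upper[c] > corr_thresh).any()])

    # 2) Greedy mRMR: relevance (normalized F-score) minus mean redundancy.
    relevance = pd.Series(f_classif(X, y)[0], index=X.columns)
    relevance /= relevance.max()
    selected = [relevance.idxmax()]
    while len(selected) < min(k_mrmr, X.shape[1]):
        remaining = X.columns.difference(selected)
        scores = {c: relevance[c] - X[selected].corrwith(X[c]).abs().mean()
                  for c in remaining}
        selected.append(max(scores, key=scores.get))

    # 3) LASSO (L1-penalized logistic regression) keeps nonzero-weight features.
    lasso = LogisticRegression(penalty="l1", solver="liblinear", C=C)
    lasso.fit(X[selected], y)
    return [f for f, w in zip(selected, lasso.coef_[0]) if w != 0]
```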

Development and validation of 3D super-resolution convolutional neural network for ¹⁸F-FDG-PET images.

Endo H, Hirata K, Magota K, Yoshimura T, Katoh C, Kudo K

PubMed · Aug 19, 2025
Positron emission tomography (PET) is a valuable tool for cancer diagnosis but generally has a lower spatial resolution compared to computed tomography (CT) or magnetic resonance imaging (MRI). High-resolution PET scanners that use silicon photomultipliers and time-of-flight measurements are expensive. Therefore, cost-effective software-based super-resolution methods are required. This study proposes a novel approach for enhancing whole-body PET image resolution by applying a 2.5-dimensional Super-Resolution Convolutional Neural Network (2.5D-SRCNN) combined with logarithmic transformation preprocessing. This method aims to improve image quality and maintain quantitative accuracy, particularly for standardized uptake value measurements, while addressing the challenges of providing a memory-efficient alternative to full three-dimensional processing and managing the wide dynamic range of tracer uptake in PET images. We analyzed data from 90 patients who underwent whole-body FDG-PET/CT examinations and reconstructed low-resolution slices with a voxel size of 4 × 4 × 4 mm and corresponding high-resolution (HR) slices with a voxel size of 2 × 2 × 2 mm. The proposed 2.5D-SRCNN model, based on the conventional 2D-SRCNN structure, incorporates information from adjacent slices to generate a high-resolution output. Logarithmic transformation of the voxel values was applied to manage the large dynamic range caused by physiological tracer accumulation in the bladder. Performance was assessed using the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The quantitative accuracy of standardized uptake values (SUV) was validated using a phantom study. The results demonstrated that the 2.5D-SRCNN with logarithmic transformation significantly outperformed the conventional 2D-SRCNN in terms of PSNR and SSIM (p < 0.0001). The proposed method also showed an improved depiction of small spheres in the phantom while maintaining the accuracy of the SUV. Our proposed method for whole-body PET images, using a super-resolution model with the 2.5D approach and logarithmic transformation, may be effective in generating super-resolution images with lower spatial error and better quantitative accuracy. The online version contains supplementary material available at 10.1186/s40658-025-00791-y.
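
A minimal PyTorch sketch of the 2.5D idea described above: three adjacent low-resolution slices enter as channels and one high-resolution slice comes out, with a logarithmic transform to compress PET's dynamic range. The layer sizes follow the classic SRCNN (9-1-5) pattern and, like the log1p choice, are assumptions rather than the paper's exact model.

```python
import torch
import torch.nn as nn

class SRCNN25D(nn.Module):
    def __init__(self, n_adjacent: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_adjacent, 64, kernel_size=9, padding=4), nn.ReLU(),
            nn.Conv2d(64, 32, kernel_size=1),                    nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=5, padding=2),
        )

    def forward(self, x):  # x: (batch, n_adjacent, H, W), log-transformed
        return self.net(x)

def to_log(volume: torch.Tensor) -> torch.Tensor:
    """log1p compresses the huge dynamic range (e.g. bladder uptake)."""
    return torch.log1p(volume.clamp(min=0))

def from_log(volume: torch.Tensor) -> torch.Tensor:
    return torch.expm1(volume)  # invert before computing SUVs

# Toy forward pass: slices i-1, i, i+1 of an upsampled LR volume -> HR slice i.
model = SRCNN25D()
lr_stack = to_log(torch.rand(1, 3, 256, 256) * 50.0)
hr_slice = from_log(model(lr_stack))
```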

TME-guided deep learning predicts chemotherapy and immunotherapy response in gastric cancer with attention-enhanced residual Swin Transformer.

Sang S, Sun Z, Zheng W, Wang W, Islam MT, Chen Y, Yuan Q, Cheng C, Xi S, Han Z, Zhang T, Wu L, Li W, Xie J, Feng W, Chen Y, Xiong W, Yu J, Li G, Li Z, Jiang Y

PubMed · Aug 19, 2025
Adjuvant chemotherapy and immune checkpoint blockade can induce durable anti-tumor responses, but the lack of effective biomarkers limits their therapeutic benefit. Using multiple cohorts totaling 3,095 patients with gastric cancer, we propose an attention-enhanced residual Swin Transformer network to predict chemotherapy response (the main task), with two prediction subtasks (ImmunoScore and periostin [POSTN]) used as intermediate tasks to improve the model's performance. Furthermore, we assess whether the model can identify which patients would benefit from immunotherapy. The deep learning model achieves high accuracy in predicting chemotherapy response and the tumor microenvironment (ImmunoScore and POSTN). We further find that the model can identify which patients may benefit from checkpoint blockade immunotherapy. This approach offers precise chemotherapy and immunotherapy response predictions, opening avenues for personalized treatment options. Prospective studies are warranted to validate its clinical utility.
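
A minimal sketch of the multi-task structure described above: a shared backbone with a main head for chemotherapy response and two auxiliary heads for the tumor-microenvironment subtasks (ImmunoScore and POSTN), whose losses regularize the shared features. The backbone placeholder, head shapes, and loss weighting are assumptions, not the study's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, backbone: nn.Module, feat_dim: int = 512):
        super().__init__()
        self.backbone = backbone                     # e.g. a Swin-style encoder
        self.response_head = nn.Linear(feat_dim, 2)  # main task: chemo response
        self.immuno_head = nn.Linear(feat_dim, 1)    # subtask: ImmunoScore
        self.postn_head = nn.Linear(feat_dim, 1)     # subtask: POSTN

    def forward(self, x):
        z = self.backbone(x)
        return self.response_head(z), self.immuno_head(z), self.postn_head(z)

def multitask_loss(outputs, targets, aux_weight: float = 0.3):
    """Main cross-entropy loss plus down-weighted auxiliary regression losses."""
    resp, immuno, postn = outputs
    y_resp, y_immuno, y_postn = targets
    main = nn.functional.cross_entropy(resp, y_resp)
    aux = nn.functional.mse_loss(immuno.squeeze(1), y_immuno) \
        + nn.functional.mse_loss(postn.squeeze(1), y_postn)
    return main + aux_weight * aux
```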

Automated Protocol Suggestions for Cranial MRI Examinations Using Locally Fine-tuned BERT Models.

Boschenriedter C, Rubbert C, Vach M, Caspers J

PubMed · Aug 18, 2025
Selection of appropriate imaging sequence protocols for cranial magnetic resonance imaging (MRI) is crucial to address the medical question and adequately support patient care. Inappropriate protocol selection can compromise diagnostic accuracy, extend scan duration, and increase the risk of misdiagnosis. Typically, radiologists determine scanning protocols based on their expertise, a process that can be time-consuming and subject to variability. Language models offer the potential to streamline this process. This study investigates the capability of bidirectional encoder representations from transformers (BERT)-based models to suggest appropriate MRI protocols based on referral information. A total of 410 anonymized electronic referrals for cranial MRI from a local order-entry system were categorized into nine protocol classes by an experienced neuroradiologist. Locally hosted instances of four different pre-trained BERT-based classifiers (BERT, ModernBERT, GottBERT, and medBERT.de) were trained to classify protocols based on referral entries, including preliminary diagnoses, prior treatment history, and clinical questions. Each model was additionally fine-tuned for local language on a large dataset of electronic referrals. The model based on medBERT.de with local-language fine-tuning performed best, correctly predicting 81% of all protocols and achieving a macro-F1 score of 0.71, with macro-precision and macro-recall of 0.73 and 0.71, respectively. Moreover, local-language fine-tuning led to performance improvements across all models. These results demonstrate the potential of language models to predict MRI protocols, even with limited training data. This approach could accelerate and standardize radiological protocol selection, offering significant benefits for clinical workflows.
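
A minimal sketch of fine-tuning a BERT-style classifier on referral text for nine protocol classes, via the Hugging Face transformers API. The model name, hyperparameters, and the toy German referral are placeholders; the study used locally hosted instances of BERT, ModernBERT, GottBERT, and medBERT.de.

```python
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
import datasets

model_name = "bert-base-german-cased"  # placeholder for a locally hosted model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=9)

# Each referral: free-text entry (diagnosis, history, clinical question) + class.
train = datasets.Dataset.from_list([
    {"text": "V.a. Schlaganfall, Frage nach Ischämie", "label": 0},  # toy example
])
train = train.map(lambda b: tokenizer(b["text"], truncation=True,
                                      padding="max_length", max_length=256),
                  batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="protocol-bert", num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=train,
)
trainer.train()
```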

Machine learning driven diagnostic pathway for clinically significant prostate cancer: the role of micro-ultrasound.

Saitta C, Buffi N, Avolio P, Beatrici E, Paciotti M, Lazzeri M, Fasulo V, Cella L, Garofano G, Piccolini A, Contieri R, Nazzani S, Silvani C, Catanzaro M, Nicolai N, Hurle R, Casale P, Saita A, Lughezzani G

PubMed · Aug 18, 2025
Detecting clinically significant prostate cancer (csPCa) remains a top priority in delivering high-quality care, yet consensus on an optimal diagnostic pathway is constantly evolving. In this study, we present an innovative diagnostic approach, leveraging a machine learning model tailored to the emerging role of prostate micro-ultrasound (micro-US) in csPCa diagnosis. We queried our prospective database for patients who underwent micro-US for a clinical suspicion of prostate cancer. CsPCa was defined as any Gleason grade group > 1. The primary outcome was the development of a diagnostic pathway that integrates clinical and radiological findings using a machine learning algorithm. The dataset was divided into training (70%) and testing subsets. The Boruta algorithm was used for variable selection; then, based on the importance coefficients, a multivariable logistic regression (MLR) model was fitted to predict csPCa. A classification and regression tree (CART) model was fitted to create the decision tree. Model accuracy was tested with receiver operating characteristic (ROC) curve analysis, using the estimated area under the curve (AUC). Overall, 1422 patients were analysed. Multivariable LR revealed PRI-MUS score ≥ 3 (OR 4.37, p < 0.001), PI-RADS score ≥ 3 (OR 2.01, p < 0.001), PSA density ≥ 0.15 (OR 2.44, p < 0.001), DRE (OR 1.93, p < 0.001), anterior lesions (OR 1.49, p = 0.004), family history of prostate cancer (OR 1.54, p = 0.005), and increasing age (OR 1.031, p < 0.001) as the best predictors of csPCa, demonstrating, in the validation cohort, an AUC of 83%, sensitivity of 78%, specificity of 72.1%, and negative predictive value of 81%. CART analysis identified an elevated PRI-MUS score as the main splitting node for stratifying our cohort. By integrating clinical features, serum biomarkers, and imaging findings, we have developed a point-of-care model that accurately predicts the presence of csPCa. Our findings support a paradigm shift towards adopting micro-US as a first-line diagnostic tool for csPCa detection, potentially optimizing clinical decision making. This approach could improve the identification of patients at higher risk for csPCa and guide the selection of the most appropriate diagnostic exams. External validation is essential to confirm these results.
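
A minimal sketch of the pipeline described above: Boruta variable selection, a multivariable logistic regression, and a CART decision tree, using the third-party boruta package (BorutaPy) and scikit-learn. The predictor matrix, split ratio, and model settings are placeholders, not the study's code.

```python
import numpy as np
from boruta import BorutaPy
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# X: clinical/imaging predictors (e.g. PRI-MUS, PI-RADS, PSA density, DRE, age);
# y: csPCa labels. Random placeholders here.
X = np.random.rand(500, 7)
y = np.random.randint(0, 2, 500)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# 1) Boruta selects all-relevant features against shadow features.
boruta = BorutaPy(RandomForestClassifier(n_jobs=-1, max_depth=5),
                  n_estimators="auto", random_state=42)
boruta.fit(X_tr, y_tr)
keep = boruta.support_

# 2) Multivariable logistic regression on the selected features.
mlr = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
print("AUC:", roc_auc_score(y_te, mlr.predict_proba(X_te[:, keep])[:, 1]))

# 3) CART tree to derive an interpretable decision pathway.
cart = DecisionTreeClassifier(max_depth=3).fit(X_tr[:, keep], y_tr)
```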