Deep learning-based radiomics and machine learning for prognostic assessment in IDH-wildtype glioblastoma after maximal safe surgical resection: a multicenter study.

Liu J, Jiang S, Wu Y, Zou R, Bao Y, Wang N, Tu J, Xiong J, Liu Y, Li Y

PubMed · May 20 2025
Glioblastoma (GBM) is a highly aggressive brain tumor with poor prognosis. This study aimed to construct and validate a radiomics-based machine learning model for predicting overall survival (OS) in IDH-wildtype GBM after maximal safe surgical resection using magnetic resonance imaging. A total of 582 patients were retrospectively enrolled, comprising 301 in the training cohort, 128 in the internal validation cohort, and 153 in the external validation cohort. Volumes of interest (VOIs) from contrast-enhanced T1-weighted imaging (CE-T1WI) were segmented into three regions: contrast-enhancing tumor, necrotic non-enhancing core, and peritumoral edema, using a ResNet-based segmentation network. A total of 4,227 radiomic features were extracted and filtered using LASSO-Cox regression to identify prognostic signatures. The prognostic model was constructed using the Mime prediction framework, categorizing patients into high- and low-risk groups based on the median OS. Model performance was assessed using the concordance index (CI) and Kaplan-Meier survival analysis. Independent prognostic factors were identified through multivariable Cox regression analysis, and a nomogram was developed for individualized risk assessment. The Step Cox [backward] + RSF model achieved CIs of 0.89, 0.81, and 0.76 in the training, internal validation, and external validation cohorts, respectively. Log-rank tests demonstrated significant survival differences between high- and low-risk groups across all cohorts (P < 0.05). Multivariate Cox analysis identified age (HR: 1.022; 95% CI: 0.979, 1.009, P < 0.05), KPS score (HR: 0.970, 95% CI: 0.960, 0.978, P < 0.05), rad-scores of the necrotic non-enhancing core (HR: 8.164; 95% CI: 2.439, 27.331, P < 0.05), and peritumoral edema (HR: 3.748; 95% CI: 1.212, 11.594, P < 0.05) as independent predictors of OS. A nomogram integrating these predictors provided individualized risk assessment. This deep learning segmentation-based radiomics model demonstrated robust performance in predicting OS in GBM after maximal safe surgical resection. By incorporating radiomic signatures and advanced machine learning algorithms, it offers a non-invasive tool for personalized prognostic assessment and supports clinical decision-making.
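As an illustration of the feature-filtering and evaluation steps described above, the sketch below shows LASSO-penalized Cox regression and concordance-index computation with the lifelines library; the file paths, column names (`time`, `event`), and penalty value are illustrative assumptions, not details from the study.

```python
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Illustrative inputs: one row per patient, radiomic feature columns plus
# follow-up time and an event indicator (1 = death observed).
train = pd.read_csv("radiomics_training_cohort.csv")   # hypothetical file
val = pd.read_csv("radiomics_validation_cohort.csv")   # hypothetical file

# L1-penalized (LASSO) Cox model; the penalty would normally be tuned by cross-validation.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)
cph.fit(train, duration_col="time", event_col="event")

# Features retaining non-zero coefficients form the radiomic signature.
signature = cph.params_[cph.params_.abs() > 1e-6].index.tolist()
print("selected features:", signature)

# Concordance index on the validation cohort (higher risk should mean shorter survival).
risk = cph.predict_partial_hazard(val)
print("validation C-index:", concordance_index(val["time"], -risk, val["event"]))
```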

TransMedSeg: A Transferable Semantic Framework for Semi-Supervised Medical Image Segmentation

Mengzhu Wang, Jiao Li, Shanshan Wang, Long Lan, Huibin Tan, Liang Yang, Guoli Yang

arXiv preprint · May 20 2025
Semi-supervised learning (SSL) has driven significant progress in semi-supervised medical image segmentation (SSMIS) through effective utilization of limited labeled data. While current SSL methods for medical images predominantly rely on consistency regularization and pseudo-labeling, they often overlook transferable semantic relationships across different clinical domains and imaging modalities. To address this, we propose TransMedSeg, a novel transferable semantic framework for semi-supervised medical image segmentation. Our approach introduces a Transferable Semantic Augmentation (TSA) module, which implicitly enhances feature representations by aligning domain-invariant semantics through cross-domain distribution matching and intra-domain structural preservation. Specifically, TransMedSeg constructs a unified feature space where teacher network features are adaptively augmented towards student network semantics via a lightweight memory module, enabling implicit semantic transformation without explicit data generation. This augmentation is realized implicitly through an expected transferable cross-entropy loss computed over the augmented teacher distribution. An upper bound of the expected loss is theoretically derived and minimized during training, incurring negligible computational overhead. Extensive experiments on medical image datasets demonstrate that TransMedSeg outperforms existing semi-supervised methods, establishing a new direction for transferable representation learning in medical image analysis.
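TransMedSeg builds on the standard teacher-student coupling used in consistency-based SSL; the minimal PyTorch sketch below shows an EMA teacher update and a pixel-wise consistency loss, and deliberately omits the TSA memory module and the transferable cross-entropy bound that are the paper's actual contributions.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(teacher, student, momentum=0.99):
    """Exponential-moving-average update of teacher weights from the student."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(momentum).add_(s_param, alpha=1.0 - momentum)

def consistency_loss(student_logits, teacher_logits):
    """Pixel-wise consistency between student predictions and detached teacher targets."""
    teacher_prob = F.softmax(teacher_logits.detach(), dim=1)
    student_log_prob = F.log_softmax(student_logits, dim=1)
    return F.kl_div(student_log_prob, teacher_prob, reduction="batchmean")
```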

Detection of maxillary sinus pathologies using deep learning algorithms.

Aktuna Belgin C, Kurbanova A, Aksoy S, Akkaya N, Orhan K

PubMed · May 20 2025
Deep learning, a subset of machine learning, is widely utilized in medical applications. Identifying maxillary sinus pathologies before surgical interventions is crucial for ensuring successful treatment outcomes. Cone beam computed tomography (CBCT) is commonly employed for maxillary sinus evaluations due to its high resolution and lower radiation exposure. This study aims to assess the accuracy of artificial intelligence (AI) algorithms in detecting maxillary sinus pathologies from CBCT scans. A dataset comprising 1000 maxillary sinuses (MS) from 500 patients was analyzed using CBCT. Sinuses were categorized based on the presence or absence of pathology, followed by segmentation of the maxillary sinus. Manual segmentation masks were generated using the semiautomatic software ITK-SNAP, which served as a reference for comparison. A convolutional neural network (CNN)-based machine learning model was then implemented to automatically segment maxillary sinus pathologies from CBCT images. To evaluate segmentation accuracy, metrics such as the Dice similarity coefficient (DSC) and intersection over union (IoU) were utilized by comparing AI-generated results with human-generated segmentations. The automated segmentation model achieved a Dice score of 0.923, a recall of 0.979, an IoU of 0.887, an F1 score of 0.970, and a precision of 0.963. This study successfully developed an AI-driven approach for segmenting maxillary sinus pathologies in CBCT images. The findings highlight the potential of this method for rapid and accurate clinical assessment of maxillary sinus conditions using CBCT imaging.
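The overlap metrics reported above follow their standard definitions; a minimal NumPy sketch for binary segmentation masks (assuming equally shaped arrays) is:

```python
import numpy as np

def dice_and_iou(pred, truth, eps=1e-8):
    """Dice similarity coefficient and intersection over union for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dice = 2.0 * intersection / (pred.sum() + truth.sum() + eps)
    iou = intersection / (union + eps)
    return dice, iou
```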

Diagnostic value of fully automated CT pulmonary angiography in patients with chronic thromboembolic pulmonary hypertension and chronic thromboembolic disease.

Lin Y, Li M, Xie S

PubMed · May 20 2025
To evaluate the value of employing artificial intelligence (AI)-assisted CT pulmonary angiography (CTPA) for patients with chronic thromboembolic pulmonary hypertension (CTEPH) and chronic thromboembolic disease (CTED). A single-center, retrospective analysis of 350 sequential patients with right heart catheterization (RHC)-confirmed CTEPH, CTED, and normal controls was conducted. Parameters such as the main pulmonary artery diameter (MPAd), the ratio of MPA to ascending aorta diameter (MPAd/AAd), the ratio of right to left ventricle diameter (RVd/LVd), and the ratio of RV to LV volume (RVv/LVv) were evaluated using automated AI software and compared with manual analysis. The reliability was assessed through an intraclass correlation coefficient (ICC) analysis. The diagnostic accuracy was determined using receiver-operating characteristic (ROC) curves. Compared to the CTED and control groups, CTEPH patients were significantly more likely to have elevated automatic CTPA metrics (all p < 0.001). Automated MPAd, MPAd/AAd, and RVv/LVv had a strong correlation with mPAP (r = 0.952, 0.904, and 0.815, respectively, all p < 0.001). The automated and manual CTPA analyses showed strong concordance. For the CTEPH and CTED categories, the optimal area under the curve (AU-ROC) reached 0.939 (CI: 0.908-0.969). In the CTEPH and control groups, the best AU-ROC was 0.970 (CI: 0.953-0.988). In the CTED and control groups, the best AU-ROC was 0.782 (CI: 0.724-0.840). Automated AI-driven CTPA analysis provides a dependable approach for evaluating patients with CTEPH, CTED, and normal controls, demonstrating excellent consistency and efficiency. Question: Guidelines do not advocate for applying treatment protocols for CTEPH to patients with CTED; early detection of the condition is crucial. Findings: Automated CTPA analysis was feasible in 100% of patients with good agreement and would have added information for early detection and identification. Clinical relevance: Automated AI-driven CTPA analysis provides a reliable approach demonstrating excellent consistency and efficiency. Additionally, these noninvasive imaging findings may aid in treatment stratification and determining optimal intervention directed by RHC.
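The group comparisons above are ROC analyses; the sketch below shows how an AU-ROC and an optimal cutoff (Youden index) could be computed for one automated metric with scikit-learn, using made-up illustrative values rather than study data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative values only: 1 = CTEPH, 0 = CTED; mpad = automated MPA diameter (mm).
y = np.array([1, 1, 1, 0, 0, 0, 1, 0])
mpad = np.array([34.2, 31.8, 36.5, 27.4, 29.1, 26.8, 33.0, 28.5])

auc = roc_auc_score(y, mpad)
fpr, tpr, thresholds = roc_curve(y, mpad)
cutoff = thresholds[np.argmax(tpr - fpr)]  # Youden index: max(sensitivity + specificity - 1)
print(f"AU-ROC = {auc:.3f}, optimal MPAd cutoff = {cutoff:.1f} mm")
```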

End-to-end Cortical Surface Reconstruction from Clinical Magnetic Resonance Images

Jesper Duemose Nielsen, Karthik Gopinath, Andrew Hoopes, Adrian Dalca, Colin Magdamo, Steven Arnold, Sudeshna Das, Axel Thielscher, Juan Eugenio Iglesias, Oula Puonti

arXiv preprint · May 20 2025
Surface-based cortical analysis is valuable for a variety of neuroimaging tasks, such as spatial normalization, parcellation, and gray matter (GM) thickness estimation. However, most tools for estimating cortical surfaces work exclusively on scans with at least 1 mm isotropic resolution and are tuned to a specific magnetic resonance (MR) contrast, often T1-weighted (T1w). This precludes application using most clinical MR scans, which are very heterogeneous in terms of contrast and resolution. Here, we use synthetic domain-randomized data to train the first neural network for explicit estimation of cortical surfaces from scans of any contrast and resolution, without retraining. Our method deforms a template mesh to the white matter (WM) surface, which guarantees topological correctness. This mesh is further deformed to estimate the GM surface. We compare our method to recon-all-clinical (RAC), an implicit surface reconstruction method which is currently the only other tool capable of processing heterogeneous clinical MR scans, on ADNI and a large clinical dataset (n=1,332). We show an approximately 50% reduction in cortical thickness error (from 0.50 to 0.24 mm) with respect to RAC and better recovery of the aging-related cortical thinning patterns detected by FreeSurfer on high-resolution T1w scans. Our method enables fast and accurate surface reconstruction of clinical scans, allowing studies (1) with sample sizes far beyond what is feasible in a research setting, and (2) of clinical populations that are difficult to enroll in research studies. The code is publicly available at https://github.com/simnibs/brainnet.
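As a rough illustration of the thickness comparison, cortical thickness can be approximated per vertex as the distance between corresponding white-matter and pial surface vertices, assuming the two meshes share a vertex correspondence; this is a simplification for illustration, not the paper's exact measure.

```python
import numpy as np

def vertexwise_thickness(wm_vertices, gm_vertices):
    """Per-vertex thickness as Euclidean distance between corresponding
    white-matter and pial surface vertices (both arrays of shape (V, 3), in mm)."""
    return np.linalg.norm(gm_vertices - wm_vertices, axis=1)

def mean_thickness_error(thickness_a, thickness_b):
    """Mean absolute difference between two thickness maps, e.g. method vs. reference."""
    return np.mean(np.abs(thickness_a - thickness_b))
```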

"DCSLK: Combined Large Kernel Shared Convolutional Model with Dynamic Channel Sampling".

Li Z, Luo S, Li H, Li Y

PubMed · May 20 2025
This study addresses the competition between Convolutional Neural Networks (CNNs) with large convolutional kernels and Vision Transformers in computer vision, focusing on the parameter counts and computational complexity that large kernels incur. Although kernel sizes have been extended up to 51×51, performance gains have plateaued, and striped convolution incurs a performance penalty. Inspired by the hierarchical visual processing mechanism in humans, this work introduces a shared-parameter mechanism for large convolutional kernels, combining the expanded receptive field of large kernels with the fine-grained feature extraction of small kernels. To contain the growth in parameters, the sharing scheme applies fine-grained processing in the central region of the kernel and wide-ranging parameter sharing in the periphery, reducing parameter count and model complexity while preserving the ability to capture long-range spatial relationships. To address the loss of spatial feature information and the increased memory access incurred during 1×1 convolutional channel compression, the study further proposes a dynamic channel sampling approach that markedly improves the accuracy of tumor subregion segmentation. The proposed method is evaluated on three brain tumor segmentation datasets: BraTS2020, BraTS2024, and Medical Segmentation Decathlon Brain 2018. Experimental results show that the proposed model surpasses current mainstream ConvNet and Transformer architectures across all performance metrics, offering new perspectives and techniques for medical image segmentation.
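The core pattern of pairing a large depthwise kernel (wide receptive field) with a small kernel (fine-grained detail) can be sketched in PyTorch as below; this illustrates the general large-plus-small-kernel idea only, not the DCSLK block itself, whose peripheral parameter sharing and dynamic channel sampling are not reproduced here.

```python
import torch
import torch.nn as nn

class LargeSmallKernelBlock(nn.Module):
    """Depthwise large-kernel branch for global context plus a small-kernel
    branch for local detail; outputs are summed (illustrative, not DCSLK itself)."""
    def __init__(self, channels, large_kernel=31, small_kernel=3):
        super().__init__()
        self.large = nn.Conv2d(channels, channels, large_kernel,
                               padding=large_kernel // 2, groups=channels)
        self.small = nn.Conv2d(channels, channels, small_kernel,
                               padding=small_kernel // 2, groups=channels)
        self.pointwise = nn.Conv2d(channels, channels, 1)  # channel mixing
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.GELU()

    def forward(self, x):
        y = self.large(x) + self.small(x)
        return self.act(self.norm(self.pointwise(y)))

x = torch.randn(1, 32, 64, 64)
print(LargeSmallKernelBlock(32)(x).shape)  # torch.Size([1, 32, 64, 64])
```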

Pancreas segmentation in CT scans: A novel MOMUNet based workflow.

Juwita J, Hassan GM, Datta A

PubMed · May 20 2025
Automatic pancreas segmentation in CT scans is crucial for various medical applications, including early diagnosis and computer-assisted surgery. However, existing segmentation methods remain suboptimal due to significant pancreas size variations across slices and severe class imbalance caused by the pancreas's small size and CT scanner movement during imaging. Traditional computer vision techniques struggle with these challenges, while deep learning-based approaches, despite their success in other domains, still face limitations in pancreas segmentation. To address these issues, we propose a novel three-stage workflow that enhances segmentation accuracy and computational efficiency. First, we introduce External Contour Cropping (ECC), a background cleansing technique that mitigates class imbalance. Second, we propose a Size Ratio (SR) technique that restructures the training dataset based on the relative size of the target organ, improving the robustness of the model against anatomical variations. Third, we develop MOMUNet, an ultra-lightweight segmentation model with only 1.31 million parameters, designed for optimal performance on limited computational resources. Our proposed workflow achieves an improvement in Dice Score (DSC) of 2.56% over state-of-the-art (SOTA) models on the NIH-Pancreas dataset and 2.97% on the MSD-Pancreas dataset. Furthermore, applying the proposed model to another small organ, colon cancer segmentation in the MSD-Colon dataset, yielded a DSC of 68.4%, surpassing the SOTA models. These results demonstrate the effectiveness of our approach in significantly improving segmentation accuracy for small abdominal organs, including the pancreas and colon, making deep learning more accessible for low-resource medical facilities.
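A minimal sketch of the background-cropping idea behind ECC: crop a slice to the bounding box of foreground voxels, with a margin, so background pixels no longer dominate. The HU threshold and margin below are assumptions for illustration, not the paper's ECC parameters.

```python
import numpy as np

def crop_to_foreground(ct_slice, threshold=-500, margin=10):
    """Crop a CT slice to the bounding box of voxels above `threshold` (HU),
    padded by `margin` pixels, to reduce background-driven class imbalance."""
    mask = ct_slice > threshold
    if not mask.any():
        return ct_slice
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    r0 = max(rows[0] - margin, 0)
    r1 = min(rows[-1] + margin + 1, ct_slice.shape[0])
    c0 = max(cols[0] - margin, 0)
    c1 = min(cols[-1] + margin + 1, ct_slice.shape[1])
    return ct_slice[r0:r1, c0:c1]
```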

CT-guided CBCT Multi-Organ Segmentation Using a Multi-Channel Conditional Consistency Diffusion Model for Lung Cancer Radiotherapy.

Chen X, Qiu RLJ, Pan S, Shelton J, Yang X, Kesarwala AH

PubMed · May 20 2025
In cone beam computed tomography (CBCT)-guided adaptive radiotherapy, rapid and precise segmentation of organs-at-risk (OARs) is essential for accurate dose verification and online replanning. The quality of CBCT images obtained with current onboard CBCT imagers and clinical imaging protocols, however, is often compromised by artifacts such as scatter and motion, particularly for thoracic CBCTs. These artifacts not only degrade image contrast but also obscure anatomical boundaries, making accurate segmentation on CBCT images significantly more challenging than on planning CT images. To address these persistent challenges, we propose a novel multi-channel conditional consistency diffusion model (MCCDM) for segmentation of OARs in thoracic CBCT images (CBCT-MCCDM), which harnesses its domain transfer capabilities to improve segmentation accuracy across different imaging modalities. By jointly training the MCCDM with CT images and their corresponding masks, our framework enables an end-to-end mapping learning process that generates accurate segmentation of OARs. The CBCT-MCCDM was used to delineate the esophagus, heart, left and right lungs, and spinal cord on CBCT images from each patient with lung cancer. We quantitatively evaluated our approach by comparing model-generated contours with ground truth contours from 33 patients with lung cancer treated with 5-fraction stereotactic body radiation therapy (SBRT), demonstrating its potential to enhance segmentation accuracy despite the presence of challenging CBCT artifacts. The proposed method was evaluated using average Dice similarity coefficient (DSC), sensitivity, specificity, 95th percentile Hausdorff distance (HD95), and mean surface distance (MSD) for each of the five OARs. The method achieved average DSC values of 0.82, 0.88, 0.95, 0.96, and 0.96 for the esophagus, heart, left lung, right lung, and spinal cord, respectively. Sensitivity values were 0.813, 0.922, 0.956, 0.958, and 0.929, respectively, while specificity values were 0.991, 0.994, 0.996, 0.996, and 0.995, respectively. We also compared the proposed method with two state-of-the-art approaches, a CBCT-only method and U-Net, and demonstrated that the proposed CBCT-MCCDM outperformed both.
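The HD95 metric reported above can be computed from the surface voxels of the two masks; a minimal SciPy sketch (binary erosion to extract surfaces, then pairwise distances) follows. It is fine for illustration, though dedicated packages are usually preferred for large volumes.

```python
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import cdist

def hd95(pred, truth, spacing=(1.0, 1.0, 1.0)):
    """95th-percentile symmetric Hausdorff distance between binary 3D masks, in mm."""
    def surface_points(mask):
        mask = mask.astype(bool)
        border = mask & ~binary_erosion(mask)            # voxels on the mask surface
        return np.argwhere(border) * np.asarray(spacing)

    p, t = surface_points(pred), surface_points(truth)
    d = cdist(p, t)                                      # all pairwise surface distances
    return max(np.percentile(d.min(axis=1), 95),         # pred -> truth
               np.percentile(d.min(axis=0), 95))         # truth -> pred
```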

CONSIGN: Conformal Segmentation Informed by Spatial Groupings via Decomposition

Bruno Viti, Elias Karabelas, Martin Holler

arXiv preprint · May 20 2025
Most machine learning-based image segmentation models produce pixel-wise confidence scores - typically derived from softmax outputs - that represent the model's predicted probability for each class label at every pixel. While this information can be particularly valuable in high-stakes domains such as medical imaging, these (uncalibrated) scores are heuristic in nature and do not constitute rigorous quantitative uncertainty estimates. Conformal prediction (CP) provides a principled framework for transforming heuristic confidence scores into statistically valid uncertainty estimates. However, applying CP directly to image segmentation ignores the spatial correlations between pixels, a fundamental characteristic of image data. This can result in overly conservative and less interpretable uncertainty estimates. To address this, we propose CONSIGN (Conformal Segmentation Informed by Spatial Groupings via Decomposition), a CP-based method that incorporates spatial correlations to improve uncertainty quantification in image segmentation. Our method generates meaningful prediction sets that come with user-specified, high-probability error guarantees. It is compatible with any pre-trained segmentation model capable of generating multiple sample outputs - such as those using dropout, Bayesian modeling, or ensembles. We evaluate CONSIGN against a standard pixel-wise CP approach across three medical imaging datasets and two COCO dataset subsets, using three different pre-trained segmentation models. Results demonstrate that accounting for spatial structure significantly improves performance across multiple metrics and enhances the quality of uncertainty estimates.
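The pixel-wise baseline that CONSIGN improves on is standard split conformal prediction: calibrate a quantile of nonconformity scores (one minus the softmax probability of the true label) on held-out pixels, then include in each pixel's prediction set every class whose score falls below that quantile. A minimal sketch, ignoring the spatial groupings that are CONSIGN's contribution:

```python
import numpy as np

def calibrate_threshold(softmax_cal, labels_cal, alpha=0.1):
    """Split-conformal quantile from calibration pixels.
    softmax_cal: (N, C) class probabilities; labels_cal: (N,) integer true labels."""
    n = len(labels_cal)
    scores = 1.0 - softmax_cal[np.arange(n), labels_cal]    # nonconformity scores
    q_level = np.ceil((n + 1) * (1 - alpha)) / n            # finite-sample correction
    return np.quantile(scores, min(q_level, 1.0))

def prediction_sets(softmax_test, q_hat):
    """Boolean (N, C) array: class c is in pixel i's set iff 1 - p_ic <= q_hat."""
    return (1.0 - softmax_test) <= q_hat
```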

Challenges in Using Deep Neural Networks Across Multiple Readers in Delineating Prostate Gland Anatomy.

Abudalou S, Choi J, Gage K, Pow-Sang J, Yilmaz Y, Balagurunathan Y

PubMed · May 20 2025
Deep learning methods show enormous promise in automating manually intensive tasks such as medical image segmentation and in providing workflow assistance to clinical experts. Deep neural networks (DNN) require a significant amount of training examples and a variety of expert opinions to capture the nuances and the context, a challenging proposition in oncological studies (H. Wang et al., Nature, vol. 620, no. 7972, pp. 47-60, Aug 2023). Inter-reader variability among clinical experts is a real-world problem that severely impacts the generalizability and reproducibility of DNNs. This study proposes quantifying the variability in DNN performance using expert opinions and exploring strategies to train the network and adapt between expert opinions. We address the inter-reader variability problem in the context of prostate gland segmentation using a well-studied DNN, the 3D U-Net model. Reference data include magnetic resonance imaging (MRI, T2-weighted) with prostate glandular anatomy annotations from two expert readers (R#1, n = 342 and R#2, n = 204). The 3D U-Net was trained and tested with individual expert examples (R#1 and R#2) and had average Dice coefficients of 0.825 (CI, [0.81 0.84]) and 0.85 (CI, [0.82 0.88]), respectively. Combined training with a representative cohort proportion (R#1, n = 100 and R#2, n = 150) yielded enhanced model reproducibility across readers, achieving an average test Dice coefficient of 0.863 (CI, [0.85 0.87]) for R#1 and 0.869 (CI, [0.87 0.88]) for R#2. We re-evaluated model performance across gland volumes (large, small) and found improved performance for large glands, with average Dice coefficients of 0.846 (CI, [0.82 0.87]) and 0.872 (CI, [0.86 0.89]) for R#1 and R#2, respectively, estimated using fivefold cross-validation. Performance for small glands diminished, with average Dice coefficients of 0.8 (CI, [0.79 0.82]) and 0.8 (CI, [0.79 0.83]) for R#1 and R#2, respectively.
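The Dice values above are reported with confidence intervals; the sketch below shows one common way such intervals are obtained, a percentile bootstrap over per-case Dice scores, offered as an illustration rather than the authors' exact procedure.

```python
import numpy as np

def bootstrap_ci(per_case_dice, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean Dice coefficient."""
    rng = np.random.default_rng(seed)
    scores = np.asarray(per_case_dice)
    means = [rng.choice(scores, size=scores.size, replace=True).mean()
             for _ in range(n_boot)]
    return np.quantile(means, [alpha / 2, 1 - alpha / 2])
```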