
A multi-class segmentation model of deep learning on contrast-enhanced computed tomography to segment and differentiate lipid-poor adrenal nodules: a dual-center study.

Bai X, Wu Z, Lu L, Zhang H, Zheng H, Zhang Y, Liu X, Zhang Z, Zhang G, Zhang D, Jin Z, Sun H

PubMed · Sep 22 2025
To develop a deep-learning model for segmenting and classifying adrenal nodules as either lipid-poor adenoma (LPA) or nodular hyperplasia (NH) on contrast-enhanced computed tomography (CECT) images. This retrospective dual-center study included 164 patients (median age 51.0 years; 93 females) with pathologically confirmed LPA or NH. The model was trained on 128 patients from the internal center and validated on 36 external cases. Radiologists annotated adrenal glands and nodules on 1-mm portal-venous phase CT images. We proposed Mamba-USeg, a novel state-space model (SSM)-based multi-class segmentation method that performs simultaneous segmentation and classification. Performance was evaluated using the mean Dice similarity coefficient (mDSC) for segmentation and sensitivity/specificity for classification, with comparisons against MultiResUNet and CPFNet. In per-slice segmentation, the model yielded an mDSC of 0.855 for the adrenal gland; for nodule segmentation, it achieved mDSCs of 0.869 (LPA) and 0.863 (NH), significantly outperforming two previous models, MultiResUNet (LPA, p < 0.001; NH, p = 0.014) and CPFNet (LPA, p = 0.003; NH, p = 0.023). Per-slice classification performance showed a sensitivity of 95.3% (95% confidence interval [CI]: 91.3-96.6%) and specificity of 92.7% (95% CI: 91.9-93.6%) for LPA, and a sensitivity of 94.2% (95% CI: 89.7-97.7%) and specificity of 91.5% (95% CI: 90.4-92.4%) for NH. Classification accuracy in the external cohort was 91.7% (95% CI: 76.8-98.9%). The proposed multi-class segmentation model can accurately segment and differentiate between LPA and NH on CECT images, demonstrating superior performance to existing methods.
Question: Accurate differentiation between LPA and NH on imaging remains clinically challenging yet critically important for guiding appropriate treatment.
Findings: Mamba-USeg, a multi-class segmentation model using pixel-level analysis and a majority-voting strategy, can accurately segment and classify adrenal nodules as LPA or NH.
Clinical relevance: The proposed model can simultaneously segment and classify adrenal nodules, outperforming previous models in accuracy; it supports clinical decision-making and may thereby reduce unnecessary surgeries in patients with adrenal hyperplasia.
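The abstract attributes the patient-level call to pixel-level analysis followed by majority voting across slices. A minimal sketch of that aggregation step (the function name and toy labels are hypothetical; the paper's exact voting rule is not specified):

```python
import numpy as np

def patient_level_label(slice_labels: np.ndarray) -> int:
    """Aggregate per-slice nodule labels (0 = LPA, 1 = NH) into a
    patient-level classification by simple majority vote."""
    counts = np.bincount(slice_labels, minlength=2)
    return int(np.argmax(counts))

# Example: 7 slices intersect the nodule; 5 voted LPA, 2 voted NH.
labels = np.array([0, 0, 1, 0, 0, 1, 0])
print(patient_level_label(labels))  # -> 0 (LPA)
```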

Deep-learning-based prediction of significant portal hypertension with single cross-sectional non-enhanced CT.

Yamamoto A, Sato S, Ueda D, Walston SL, Kageyama K, Jogo A, Nakano M, Kotani K, Uchida-Kobayashi S, Kawada N, Miki Y

PubMed · Sep 22 2025
The purpose of this study was to establish a predictive deep learning (DL) model for clinically significant portal hypertension (CSPH) based on a single cross-sectional non-contrast CT image, and to compare four representative anatomical positions to determine the most suitable for detecting CSPH. The study included 421 patients with chronic liver disease who underwent hepatic venous pressure gradient measurement at our institution between May 2007 and January 2024. Patients were randomly divided into training, validation, and test datasets at a ratio of 8:1:1. Non-contrast cross-sectional CT images from four target areas of interest were used to create four DL-based models for predicting CSPH. The areas of interest were the umbilical portion of the portal vein (PV), the first right branch of the PV, the confluence of the splenic vein and PV, and the maximum cross-section of the spleen. The models were implemented as convolutional neural networks with a multilayer perceptron classifier. The model with the best predictive ability for CSPH was then compared to 13 conventional evaluation methods. Among the four areas, the umbilical portion of the PV had the highest predictive ability for CSPH (area under the curve [AUC]: 0.80). At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively. This DL model outperformed the ANTICIPATE model. We developed an algorithm that can predict CSPH immediately from a single slice of non-contrast CT, using the most suitable image of the umbilical portion of the PV.
Question: CSPH predicts complications but requires invasive hepatic venous pressure gradient measurement for diagnosis.
Findings: At the threshold maximizing the Youden index, sensitivity and specificity were 0.867 and 0.615, respectively; the DL model outperformed the ANTICIPATE model.
Clinical relevance: A DL model can accurately predict CSPH from a single non-contrast CT image, providing a non-invasive alternative to invasive methods and aiding early detection and risk stratification in chronic liver disease without image manipulation.
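The operating point here is chosen by maximizing the Youden index (sensitivity + specificity − 1) along the ROC curve. A small sketch of how such a threshold is typically selected with scikit-learn; the simulated scores are illustrative, not the study's data:

```python
import numpy as np
from sklearn.metrics import roc_curve

def youden_threshold(y_true, y_score):
    """Pick the score cutoff that maximizes Youden's J
    (sensitivity + specificity - 1) on a validation set."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    j = tpr - fpr  # Youden's J at each candidate threshold
    best = np.argmax(j)
    return thresholds[best], tpr[best], 1 - fpr[best]

# Toy example with simulated CSPH probabilities.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
p = np.clip(0.5 * y + rng.normal(0.3, 0.2, 200), 0, 1)
thr, sens, spec = youden_threshold(y, p)
print(f"threshold={thr:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```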

Neural Network-Driven Direct CBCT-Based Dose Calculation for Head-and-Neck Proton Treatment Planning

Muheng Li, Evangelia Choulilitsa, Lisa Fankhauser, Francesca Albertini, Antony Lomax, Ye Zhang

arXiv preprint · Sep 22 2025
Accurate dose calculation on cone beam computed tomography (CBCT) images is essential for modern proton treatment planning workflows, particularly when accounting for inter-fractional anatomical changes in adaptive treatment scenarios. Traditional CBCT-based dose calculation suffers from image quality limitations, requiring complex correction workflows. This study develops and validates a deep learning approach for direct proton dose calculation from CBCT images using extended Long Short-Term Memory (xLSTM) neural networks. A retrospective dataset of 40 head-and-neck cancer patients with paired planning CT and treatment CBCT images was used to train an xLSTM-based neural network (CBCT-NN). The architecture incorporates energy token encoding and beam's-eye-view sequence modelling to capture spatial dependencies in proton dose deposition patterns. Training utilized 82,500 paired beam configurations with Monte Carlo-generated ground truth doses. Validation was performed on 5 independent patients using gamma analysis, mean percentage dose error assessment, and dose-volume histogram comparison. The CBCT-NN achieved gamma pass rates of 95.1 ± 2.7% using 2mm/2% criteria. Mean percentage dose errors were 2.6 ± 1.4% in high-dose regions (>90% of max dose) and 5.9 ± 1.9% globally. Dose-volume histogram analysis showed excellent preservation of target coverage metrics (Clinical Target Volume V95% difference: -0.6 ± 1.1%) and organ-at-risk constraints (parotid mean dose difference: -0.5 ± 1.5%). Computation time is under 3 minutes without sacrificing Monte Carlo-level accuracy. This study demonstrates the proof-of-principle of direct CBCT-based proton dose calculation using xLSTM neural networks. The approach eliminates traditional correction workflows while achieving comparable accuracy and computational efficiency suitable for adaptive protocols.
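For readers unfamiliar with the quoted error metric, below is a sketch of a mean percentage dose error restricted to the high-dose region (>90% of max dose). Normalization conventions vary; normalizing to the maximum reference dose is an assumption here, and the toy volumes are stand-ins for predicted and Monte Carlo doses:

```python
import numpy as np

def mean_percent_dose_error(pred, ref, region_frac=0.9):
    """Mean percentage dose error over voxels receiving more than
    `region_frac` of the max reference dose, mirroring the
    >90%-of-max evaluation quoted in the abstract."""
    mask = ref > region_frac * ref.max()
    return 100.0 * np.mean(np.abs(pred[mask] - ref[mask]) / ref.max())

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 32)) * 70.0        # Gy, hypothetical reference
pred = ref + rng.normal(0, 0.5, ref.shape)   # small simulated model error
print(f"{mean_percent_dose_error(pred, ref):.2f}%")
```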

Development of a patient-specific cone-beam computed tomography dose optimization model using machine learning in image-guided radiation therapy.

Miura S

PubMed · Sep 22 2025
Cone-beam computed tomography (CBCT) is commonly utilized in radiation therapy to visualize soft tissues and bone structures. This study aims to develop a machine learning model that predicts optimal, patient-specific CBCT doses that minimize radiation exposure while maintaining soft tissue image quality in prostate radiation therapy. Phantom studies evaluated the relationship between dose and two image quality metrics: image standard deviation (SD) and contrast-to-noise ratio (CNR). In a prostate-simulating phantom, CNR did not significantly decrease at doses above 40% compared to the 100% dose. Based on low-contrast resolution, this value was selected as the minimum clinical dose level. In clinical image analysis, both SD and CNR degraded with decreasing dose, consistent with the phantom findings. The structural similarity index between CBCT and planning computed tomography (CT) significantly decreased at doses below 60%, with a mean value of 0.69 at 40%. Previous studies suggest that this level may correspond to acceptable registration accuracy within the typical planning target volume margins applied in image-guided radiotherapy. A machine learning model was developed to predict CBCT doses using patient-specific metrics from planning CT scans and CBCT image quality parameters. Among the tested models, support vector regression achieved the highest accuracy, with an R² value of 0.833 and a root mean squared error of 0.0876, and was therefore adopted for dose prediction. These results support the feasibility of patient-specific CBCT imaging protocols that reduce radiation dose while maintaining clinically acceptable image quality for soft tissue registration.
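A minimal sketch of the adopted regressor, support vector regression evaluated by R² and RMSE, using scikit-learn. The feature set and data below are hypothetical stand-ins for the planning-CT patient metrics:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_squared_error

# Hypothetical per-patient metrics (e.g., lateral thickness,
# water-equivalent diameter) and the relative dose level that
# meets the image-quality target.
rng = np.random.default_rng(2)
X = rng.normal(size=(120, 4))
y = 0.4 + 0.1 * X[:, 0] - 0.05 * X[:, 1] + rng.normal(0, 0.05, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=1.0, epsilon=0.01))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"R^2={r2_score(y_te, pred):.3f} "
      f"RMSE={mean_squared_error(y_te, pred) ** 0.5:.4f}")
```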

Learning Contrastive Multimodal Fusion with Improved Modality Dropout for Disease Detection and Prediction

Yi Gu, Kuniaki Saito, Jiaxin Ma

arXiv preprint · Sep 22 2025
As medical diagnoses increasingly leverage multimodal data, machine learning models are expected to effectively fuse heterogeneous information while remaining robust to missing modalities. In this work, we propose a novel multimodal learning framework that integrates enhanced modality dropout and contrastive learning to address real-world limitations such as modality imbalance and missingness. Our approach introduces learnable modality tokens for improving missingness-aware fusion of modalities and augments conventional unimodal contrastive objectives with fused multimodal representations. We validate our framework on large-scale clinical datasets for disease detection and prediction tasks, encompassing both visual and tabular modalities. Experimental results demonstrate that our method achieves state-of-the-art performance, particularly in challenging and practical scenarios where only a single modality is available. Furthermore, we show its adaptability through successful integration with a recent CT foundation model. Our findings highlight the effectiveness, efficiency, and generalizability of our approach for multimodal learning, offering a scalable, low-cost solution with significant potential for real-world clinical applications. The code is available at https://github.com/omron-sinicx/medical-modality-dropout.
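A compact PyTorch sketch of the core idea, modality dropout with learnable modality tokens standing in for missing inputs. The module layout and names are illustrative, not the authors' released code (linked above):

```python
import torch
import torch.nn as nn

class ModalityDropoutFusion(nn.Module):
    """Missingness-aware fusion sketch: each modality embedding that is
    dropped (at train time) or absent (at test time) is replaced by a
    learnable modality token before fusion."""
    def __init__(self, dim: int, n_modalities: int, p_drop: float = 0.3):
        super().__init__()
        self.tokens = nn.Parameter(torch.randn(n_modalities, dim) * 0.02)
        self.p_drop = p_drop
        self.fuse = nn.Sequential(nn.Linear(n_modalities * dim, dim), nn.ReLU())

    def forward(self, feats, present):
        # feats: (B, M, D) per-modality embeddings; present: (B, M) bool mask
        if self.training:  # randomly drop observed modalities for robustness
            keep = torch.rand_like(present, dtype=torch.float) > self.p_drop
            present = present & keep
        mask = present.unsqueeze(-1).float()
        filled = mask * feats + (1 - mask) * self.tokens.unsqueeze(0)
        return self.fuse(filled.flatten(1))

fusion = ModalityDropoutFusion(dim=64, n_modalities=2)
feats = torch.randn(8, 2, 64)
present = torch.tensor([[True, True]] * 8)
print(fusion(feats, present).shape)  # torch.Size([8, 64])
```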

Comprehensive Assessment of Tumor Stromal Heterogeneity in Bladder Cancer by Deep Learning and Habitat Radiomics.

Du Y, Sui Y, Tao Y, Cao J, Jiang X, Yu J, Wang B, Wang Y, Li H

PubMed · Sep 22 2025
Tumor stromal heterogeneity plays a pivotal role in bladder cancer progression. The tumor-stroma ratio (TSR) is a key pathological marker reflecting stromal heterogeneity. This study aimed to develop a preoperative, CT-based machine learning model for predicting TSR in bladder cancer, comparing various radiomic approaches, and evaluating their utility in prognostic assessment and immunotherapy response prediction. A total of 477 bladder urothelial carcinoma patients from two centers were retrospectively included. Tumors were segmented on preoperative contrast-enhanced CT, and radiomic features were extracted. K-means clustering was used to divide tumors into subregions. Radiomics models were constructed: a conventional model (Intra), a multi-subregion model (Habitat), and single-subregion models (HabitatH1/H2/H3). A deep transfer learning model (DeepL) based on the largest tumor cross-section was also developed. Model performance was evaluated in training, testing, and external validation cohorts, and associations with recurrence-free survival, CD8+ T cell infiltration, and immunotherapy response were analyzed. The HabitatH1 model demonstrated robust diagnostic performance with favorable calibration and clinical utility. The DeepL model surpassed all radiomics models in predictive accuracy. A nomogram combining DeepL and clinical variables effectively predicted recurrence-free survival, CD8+ T cell infiltration, and immunotherapy response. Imaging-predicted TSR showed significant associations with the tumor immune microenvironment and treatment outcomes. CT-based habitat radiomics and deep learning models enable non-invasive, quantitative assessment of TSR in bladder cancer. The DeepL model provides superior diagnostic and prognostic value, supporting personalized treatment decisions and prediction of immunotherapy response.
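Habitat radiomics as described here rests on K-means partitioning of the segmented tumor into subregions. A minimal sketch of that step; using voxel HU intensity as the only clustering feature is a simplifying assumption, as real pipelines usually cluster richer per-voxel features:

```python
import numpy as np
from sklearn.cluster import KMeans

def habitat_map(ct_volume, tumor_mask, n_habitats=3, seed=0):
    """Partition a segmented tumor into intensity-based subregions
    ("habitats") with K-means, a common habitat-radiomics recipe."""
    voxels = ct_volume[tumor_mask].reshape(-1, 1).astype(float)
    labels = KMeans(n_clusters=n_habitats, n_init=10,
                    random_state=seed).fit_predict(voxels)
    out = np.zeros(ct_volume.shape, dtype=np.int8)
    out[tumor_mask] = labels + 1  # habitats 1..n, 0 = background
    return out

rng = np.random.default_rng(3)
ct = rng.normal(40, 20, (16, 16, 16))  # hypothetical HU values
mask = np.zeros_like(ct, dtype=bool)
mask[4:12, 4:12, 4:12] = True          # toy tumor segmentation
print(np.unique(habitat_map(ct, mask)))  # [0 1 2 3]
```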

Artificial Intelligence-Assisted Treatment Planning in an Interdisciplinary Rehabilitation in the Esthetic Zone.

Fonseca FJPO, Matias BBR, Pacheco P, Muraoka CSAS, Silva EVF, Sesma N

PubMed · Sep 22 2025
This case report describes the application of an integrated digital workflow in which diagnosis, planning, and execution were enhanced by artificial intelligence (AI), enabling an accurate interdisciplinary esthetic-functional rehabilitation. With AI-powered software, the sequence from orthodontic treatment to final rehabilitation achieved high predictability, addressing the patient's chief complaints. A patient presented with a missing maxillary left central incisor (tooth 11) and dissatisfaction with a removable partial denture. Clinical examination revealed a gummy smile, a deviated midline, and a mesiodistal space disproportionate to the midline. Initial documentation included photographs, intraoral scanning, and cone-beam computed tomography of the maxilla. These data were integrated into digital planning software to create an interdisciplinary plan. The workflow included prosthetically guided orthodontic treatment with aligners, a motivational mockup, guided implant surgery, peri-implant soft tissue management, and final prosthetic rehabilitation using a CAD/CAM approach. This digital workflow enhanced communication among the multidisciplinary team and with the patient, ensuring highly predictable esthetic and functional outcomes. Comprehensive digital workflows improve diagnostic accuracy, streamline planning with AI, and facilitate patient understanding. This approach increases patient satisfaction, supports interdisciplinary collaboration, and promotes treatment adherence.

Volume Fusion-based Self-Supervised Pretraining for 3D Medical Image Segmentation.

Wang G, Fu J, Wu J, Luo X, Zhou Y, Liu X, Li K, Lin J, Shen B, Zhang S

PubMed · Sep 22 2025
The performance of deep learning models for medical image segmentation is often limited when training data or annotations are scarce. Self-Supervised Learning (SSL) is an appealing solution to this dilemma due to its ability to learn features from large amounts of unannotated images. Existing SSL methods have focused on pretraining either an encoder for global feature representation or an encoder-decoder structure for image restoration, where the gap between pretext and downstream tasks limits the usefulness of pretrained decoders in downstream segmentation. In this work, we propose a novel SSL strategy named Volume Fusion (VolF) for pretraining 3D segmentation models. It minimizes the gap between pretext and downstream tasks by introducing a pseudo-segmentation pretext task, in which two sub-volumes are fused by a discretized block-wise fusion coefficient map. The model takes the fused result as input and predicts the category of the fusion coefficient for each voxel, and can be trained with standard supervised segmentation loss functions without manual annotations. Experiments with an abdominal CT dataset for pretraining and both in-domain and out-of-domain downstream datasets showed that VolF yielded a large performance gain over training from scratch, with faster convergence, and outperformed several state-of-the-art SSL methods. In addition, it generalizes to different network structures, and the learned features have high generalizability across body parts and modalities.
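The pretext task is concrete enough to sketch: fuse two sub-volumes through a discretized block-wise coefficient map and use the coefficient categories as free per-voxel labels. A minimal NumPy version under assumed settings (block size, class count, and volume dimensions divisible by the block size are all illustrative choices):

```python
import numpy as np

def volume_fusion(vol_a, vol_b, n_classes=5, block=8, seed=0):
    """Sketch of the VolF pretext task: fuse two sub-volumes with a
    discretized block-wise coefficient map and return (fused input,
    per-voxel coefficient class) as a free pseudo-segmentation pair.
    Assumes volume dims are divisible by `block`."""
    rng = np.random.default_rng(seed)
    grid = tuple(s // block for s in vol_a.shape)
    classes = rng.integers(0, n_classes, grid)     # one class per block
    label = np.kron(classes, np.ones((block,) * 3, dtype=int))
    alpha = label / (n_classes - 1)                # coefficient in [0, 1]
    fused = alpha * vol_a + (1 - alpha) * vol_b
    return fused, label  # train a segmenter: fused -> label, no annotations

a, b = np.random.rand(32, 32, 32), np.random.rand(32, 32, 32)
x, y = volume_fusion(a, b)
print(x.shape, np.unique(y))  # (32, 32, 32) and the coefficient classes
```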

An attention aided wavelet convolutional neural network for lung nodule characterization.

Halder A

PubMed · Sep 21 2025
Lung cancer is a leading cause of cancer-related mortality worldwide, necessitating the development of accurate and efficient diagnostic methods. Early detection and accurate characterization of pulmonary nodules significantly influence patient prognosis and treatment planning and can improve the five-year survival rate. However, distinguishing benign from malignant nodules using conventional imaging techniques remains a clinical challenge due to subtle structural similarities. To address this issue, this study proposes a novel two-pathway wavelet-based deep learning computer-aided diagnosis (CADx) framework for improved lung nodule classification using high-resolution computed tomography (HRCT) images. The proposed Wavelet-based Lung Cancer Detection Network (WaveLCDNet) characterizes lung nodule images through a hierarchical feature extraction pipeline consisting of convolutional neural network (CNN) blocks and trainable wavelet blocks for multi-resolution analysis. The wavelet block captures both spatial and frequency-domain information, preserving the fine-grained texture details essential for nodule characterization. In addition, a convolutional block attention module (CBAM)-based attention mechanism enhances discriminative feature learning. The features extracted from both pathways are adaptively fused and processed with a global average pooling (GAP) operation. WaveLCDNet was trained and evaluated on the publicly accessible LIDC-IDRI dataset and achieved sensitivity, specificity, and accuracy of 96.89%, 95.52%, and 96.70% for nodule characterization. The framework was also externally validated on the Kaggle DSB2017 test dataset, achieving 95.90% accuracy with a Brier score of 0.0215, reinforcing its reliability across independent imaging sources and its practical value for integration into real-world diagnostic workflows. By effectively combining multi-scale convolutional filtering with wavelet-based multi-resolution analysis and attention mechanisms, the framework outperforms recent state-of-the-art deep learning models and offers a promising CADx solution for enhancing early diagnosis in lung cancer screening.
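The attention component named here, CBAM, is a published module with a standard form: channel attention from average- and max-pooled descriptors through a shared MLP, followed by spatial attention over pooled channel maps. A minimal PyTorch sketch (2D, with assumed reduction ratio and kernel size):

```python
import torch
import torch.nn as nn

class CBAM(nn.Module):
    """Sketch of a Convolutional Block Attention Module: sequential
    channel attention (shared MLP over avg/max-pooled descriptors)
    then spatial attention (conv over pooled channel maps)."""
    def __init__(self, channels: int, reduction: int = 16, k: int = 7):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1, bias=False),
            nn.ReLU(),
            nn.Conv2d(channels // reduction, channels, 1, bias=False))
        self.spatial = nn.Conv2d(2, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        # Channel attention from global average- and max-pooled features.
        ca = torch.sigmoid(self.mlp(x.mean((2, 3), keepdim=True)) +
                           self.mlp(x.amax((2, 3), keepdim=True)))
        x = x * ca
        # Spatial attention from channel-wise mean and max maps.
        sa = torch.sigmoid(self.spatial(torch.cat(
            [x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)))
        return x * sa

feat = torch.randn(2, 64, 32, 32)
print(CBAM(64)(feat).shape)  # torch.Size([2, 64, 32, 32])
```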

Chest computed tomography-based artificial intelligence-aided latent class analysis for diagnosis of severe pneumonia.

Chu C, Guo Y, Lu Z, Gui T, Zhao S, Cui X, Lu S, Jiang M, Li W, Gao C

PubMed · Sep 20 2025
There is little literature describing artificial intelligence (AI)-aided diagnosis of severe pneumonia (SP) subphenotypes or the association of such subphenotypes with the efficacy of ventilatory treatment. The aim of this study was to determine whether clinical and biological heterogeneity, such as in ventilation and gas exchange, exists among patients with SP using chest computed tomography (CT)-based, AI-aided latent class analysis (LCA). This retrospective study included 413 patients hospitalized at Xinhua Hospital and diagnosed with SP from June 1, 2015 to May 30, 2020. AI quantification results of chest CT, alone and combined with additional clinical variables, were used to develop LCA models in an SP population. The optimal number of subphenotypes was determined by evaluating statistical indicators of all the LCA models, and their clinical implications, such as guiding ventilation strategies, were further explored with statistical methods. The two-class LCA model based on AI quantification of chest CT described the biological characteristics of the SP population well and yielded two clinical subphenotypes. Patients with subphenotype-1 had milder infections (P < 0.001) than patients with subphenotype-2 and had lower 30-day (P < 0.001), 90-day (P < 0.001), in-hospital (P = 0.001), and 2-year (P < 0.001) mortality. Patients with subphenotype-1 showed a better match between the percentage of non-infected lung volume (used to quantify ventilation) and oxygen saturation (used to reflect gas exchange) than patients with subphenotype-2, and the difference in the matching degree of lung ventilation and gas exchange between the two subphenotypes was significant (P < 0.001). Compared with patients with subphenotype-2, those with subphenotype-1 showed a relatively better match between CT-based AI metrics of the non-infected region and oxygenation, and their clinical outcomes were effectively improved after receiving invasive ventilation. A two-class LCA model based on AI quantification of chest CT in the SP population revealed clinical heterogeneity of lung function; identifying the degree of match between ventilation and gas exchange may help guide decisions about assisted ventilation.
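The abstract does not specify the LCA implementation; for continuous CT-AI indicators, LCA is commonly approximated by a finite mixture model with the class count chosen by an information criterion. A hedged sketch of that approach with a Gaussian mixture and BIC, where all features and data below are hypothetical:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical CT-AI features per patient, e.g., % infected lung volume,
# % ground-glass opacity, % consolidation. LCA over continuous indicators
# is closely related to fitting a finite mixture and selecting k by BIC.
rng = np.random.default_rng(4)
X = np.vstack([rng.normal([10, 5, 2], 3, (60, 3)),     # milder subphenotype
               rng.normal([35, 20, 10], 5, (40, 3))])  # more severe one

bics = {}
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, random_state=0).fit(X)
    bics[k] = gm.bic(X)  # lower BIC = better class-count trade-off
best_k = min(bics, key=bics.get)
labels = GaussianMixture(n_components=best_k, random_state=0).fit_predict(X)
print(f"best k by BIC: {best_k}, class sizes: {np.bincount(labels)}")
```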
