
Hybrid-MedNet: a hybrid CNN-transformer network with multi-dimensional feature fusion for medical image segmentation.

Memon Y, Zeng F

PubMed · Sep 19 2025
Twin-to-Twin Transfusion Syndrome (TTTS) is a complex prenatal condition in which monochorionic twins experience an imbalance in blood flow due to abnormal vascular connections in the shared placenta. Fetoscopic Laser Photocoagulation (FLP) is the first-line treatment for TTTS, aimed at coagulating these abnormal connections. However, the procedure is complicated by a limited field of view, occlusions, poor-quality endoscopic images, and distortions caused by artifacts. To optimize the visualization of placental vessels during surgical procedures, we propose Hybrid-MedNet, a novel hybrid CNN-transformer network that incorporates multi-dimensional deep feature learning techniques. The network introduces a BiPath Tokenization module that enhances vessel boundary detection by capturing both channel dependencies and spatial features through parallel attention mechanisms. A Context-Aware Transformer block addresses the weak inductive bias problem in traditional transformers while preserving spatial relationships crucial for accurate vessel identification in distorted fetoscopic images. Furthermore, we develop a Multi-Scale Trifusion Module that integrates multi-dimensional features to capture rich vascular representations from the encoder and facilitate precise vessel information transfer to the decoder for improved segmentation accuracy. Experimental results show that our approach achieves a Dice score of 95.40% on fetoscopic images, outperforming 10 state-of-the-art segmentation methods. The consistent superior performance across four segmentation tasks and ten distinct datasets confirms the robustness and effectiveness of our method for diverse and complex medical imaging applications.
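The BiPath idea of weighting features along channel and spatial axes in parallel before handing them to a transformer can be sketched in a few lines. The PyTorch module below is a minimal illustration of that parallel-attention tokenization pattern; the class name, gating layers, and dimensions are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BiPathTokenizer(nn.Module):
    """Illustrative parallel channel/spatial attention tokenizer.

    Channel path: squeeze-and-excitation-style channel weighting.
    Spatial path: conv-derived spatial attention map.
    Outputs flattened tokens for a transformer stage.
    """
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(channels, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Parallel attention: channel dependencies plus spatial saliency.
        x = x * self.channel_gate(x) + x * self.spatial_gate(x)
        # Flatten the H*W positions into tokens of dimension C.
        return x.flatten(2).transpose(1, 2)  # (B, H*W, C)

tokens = BiPathTokenizer(64)(torch.randn(1, 64, 32, 32))
print(tokens.shape)  # torch.Size([1, 1024, 64])
```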

Insertion of hepatic lesions into clinical photon-counting-detector CT projection data.

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

PubMed · Sep 19 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), patient image data with known pathology and precise annotations are desirable. Standard patient case collection and reference standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images. 
Approach. The framework used artificial intelligence (AI)-assisted semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy used for commercial beam-hardening correction in the PCD-CT system was estimated and used to calculate multi-energy forward projections of these lesion models at different energy thresholds. The lesion projections were then added to patient projections from PCD-CT exams, and the modified projections were reconstructed into realistic lesion-present patient images using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, the structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed rank test. 
Main results. No statistically significant discrepancy (p > 0.05) in CT numbers was observed between original and re-inserted tissue- and contrast-media-mimicking rods and hepatic lesions (mean ± standard deviation: rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU). Lesions at the re-inserted locations showed morphology similar to the originals (SSIM, mean ± standard deviation: 0.95 ± 0.02), and the corresponding radiomic features formed highly similar feature clusters with no statistically significant differences (p > 0.05). 
Significance. The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and advanced reconstruction and post-processing algorithms with target pathological features.
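The core projection-domain trick, forward-projecting a lesion model and adding its sinogram to the patient sinogram before reconstruction, can be illustrated with a simplified mono-energetic, parallel-beam analogue using scikit-image. This sketch does not reproduce the paper's multi-energy forward model or the vendor reconstruction; all objects are toy stand-ins.

```python
import numpy as np
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 180, endpoint=False)

# Toy "patient" background and a small hypodense lesion model.
patient = np.zeros((256, 256))
patient[64:192, 64:192] = 1.0
lesion = np.zeros_like(patient)
yy, xx = np.mgrid[:256, :256]
lesion[(yy - 128) ** 2 + (xx - 110) ** 2 < 10 ** 2] = -0.2

# Forward-project both, add the lesion sinogram to the patient sinogram,
# then reconstruct the modified projections.
sino_patient = radon(patient, theta=theta)
sino_lesion = radon(lesion, theta=theta)
recon = iradon(sino_patient + sino_lesion, theta=theta, filter_name="ramp")
print(recon[128, 110] < recon[128, 150])  # lesion darker than background
```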

Intratumoral and peritumoral heterogeneity based on CT to predict the pathological response after neoadjuvant chemoimmunotherapy in esophageal squamous cell carcinoma.

Ling X, Yang X, Wang P, Li Y, Wen Z, Wang J, Chen K, Yu Y, Liu A, Ma J, Meng W

PubMed · Sep 19 2025
The neoadjuvant chemoimmunotherapy (NACI) regimen of camrelizumab plus paclitaxel and nedaplatin has shown promising potential in patients with esophageal squamous cell carcinoma (ESCC), but accurately predicting the therapeutic response remains a challenge. We aimed to develop and validate a CT-based machine learning model that incorporates both intratumoral and peritumoral heterogeneity to predict the pathological response of ESCC patients after NACI. Patients with ESCC who underwent surgery following NACI between June 2020 and July 2024 were included retrospectively and prospectively. Univariate and multivariate logistic regression analyses were performed to identify clinical variables associated with pathological response. Traditional radiomics features and habitat radiomics features from the intratumoral and peritumoral regions were extracted from post-treatment CT images, and six predictive models were established using 14 machine learning algorithms. The combined model was developed by integrating intratumoral and peritumoral habitat radiomics features with clinical variables. Model performance was evaluated using the area under the receiver operating characteristic curve (AUC). A total of 157 patients (mean [SD] age, 59.6 [6.5] years) were enrolled, of whom 60 (38.2%) achieved major pathological response (MPR) and 40 (25.5%) achieved pathological complete response (pCR). The combined model demonstrated excellent predictive ability for MPR after NACI, with an AUC of 0.915 (95% CI, 0.844-0.981), accuracy of 0.872, sensitivity of 0.733, and specificity of 0.938 in the test set. In a sensitivity analysis focusing on pCR, the combined model remained robust, with an AUC of 0.895 (95% CI, 0.782-0.980) in the test set. The combined model integrating intratumoral and peritumoral habitat radiomics features with clinical variables can accurately predict MPR in ESCC patients after NACI and shows promising potential for predicting pCR.
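The combined-model idea, concatenating radiomics-style features with clinical variables and scoring by AUC, can be sketched as follows. The data are synthetic stand-ins, and logistic regression stands in for one of the many algorithms the study compared.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical stand-ins: rows = patients, columns = intratumoral +
# peritumoral habitat radiomics features plus clinical variables.
rng = np.random.default_rng(42)
X_radiomics = rng.normal(size=(157, 20))
X_clinical = rng.normal(size=(157, 3))
y_mpr = (X_radiomics[:, 0] + 0.5 * X_clinical[:, 0]
         + rng.normal(size=157) > 0).astype(int)

# Combined model: concatenate feature blocks, standardize, fit classifier.
X = np.hstack([X_radiomics, X_clinical])
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y_mpr, test_size=0.3, stratify=y_mpr, random_state=0)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)
print(f"test AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```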

Visionerves: Automatic and Reproducible Hybrid AI for Peripheral Nervous System Recognition Applied to Endometriosis Cases

Giammarco La Barbera, Enzo Bonnot, Thomas Isla, Juan Pablo de la Plata, Joy-Rose Dunoyer de Segonzac, Jennifer Attali, Cécile Lozach, Alexandre Bellucci, Louis Marcellin, Laure Fournier, Sabine Sarnacki, Pietro Gori, Isabelle Bloch

arXiv preprint · Sep 18 2025
Endometriosis often leads to chronic pelvic pain and possible nerve involvement, yet imaging the peripheral nerves remains a challenge. We introduce Visionerves, a novel hybrid AI framework for peripheral nervous system recognition from multi-gradient DWI and morphological MRI data. Unlike conventional tractography, Visionerves encodes anatomical knowledge through fuzzy spatial relationships, removing the need for manual ROI selection. The pipeline comprises two phases: (A) automatic segmentation of anatomical structures using a deep learning model, and (B) tractography and nerve recognition by symbolic spatial reasoning. Applied to the lumbosacral plexus in 10 women with confirmed or suspected endometriosis, Visionerves demonstrated substantial improvements over standard tractography, with Dice score improvements of up to 25% and spatial errors reduced to less than 5 mm. This automatic and reproducible approach enables detailed nerve analysis and paves the way for non-invasive diagnosis of endometriosis-related neuropathy, as well as other conditions with nerve involvement.
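The fuzzy spatial relationships used for symbolic reasoning can be illustrated with a classic fuzzy directional landscape: each pixel's membership in "lies in direction u of the reference structure" decays with angular deviation from u. The brute-force NumPy sketch below illustrates that concept only and is not the Visionerves implementation.

```python
import numpy as np

def fuzzy_directional_landscape(ref_mask: np.ndarray,
                                direction: np.ndarray) -> np.ndarray:
    """Fuzzy 'in direction u of the reference' landscape (2D sketch).

    For each pixel x, take the smallest angle between u and (x - y) over
    reference pixels y; membership = max(0, 1 - 2*angle/pi). Brute force,
    for illustration only.
    """
    u = direction / np.linalg.norm(direction)
    ys, xs = np.nonzero(ref_mask)
    ref_pts = np.stack([ys, xs], axis=1).astype(float)
    out = np.zeros(ref_mask.shape)
    for iy in range(ref_mask.shape[0]):
        for ix in range(ref_mask.shape[1]):
            v = np.array([iy, ix]) - ref_pts          # vectors y -> x
            norms = np.linalg.norm(v, axis=1)
            valid = norms > 0
            if not valid.any():
                out[iy, ix] = 1.0                     # pixel is the reference
                continue
            cosang = np.clip(v[valid] @ u / norms[valid], -1.0, 1.0)
            angle = np.arccos(cosang).min()
            out[iy, ix] = max(0.0, 1.0 - 2.0 * angle / np.pi)
    return out

ref = np.zeros((40, 40), dtype=bool)
ref[18:22, 18:22] = True
below = fuzzy_directional_landscape(ref, np.array([1.0, 0.0]))  # "below"
print(below[30, 20] > below[10, 20])  # True: pixels below score higher
```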

Deep Learning Integration of Endoscopic Ultrasound Features and Serum Data Reveals LTB4 as a Diagnostic and Therapeutic Target in ESCC.

Huo S, Zhang W, Wang Y, Qi J, Wang Y, Bai C

PubMed · Sep 18 2025
Background: Early diagnosis and accurate prediction of treatment response in esophageal squamous cell carcinoma (ESCC) remain major clinical challenges due to the lack of reliable, noninvasive biomarkers. Recently, artificial intelligence-driven endoscopic ultrasound image analysis has shown great promise in revealing genomic features associated with imaging phenotypes. Methods: A prospective study of 115 patients with ESCC was conducted. Deep features were extracted from endoscopic ultrasound images using a ResNet50 convolutional neural network. Important features shared across three machine learning models (neural network [NN], generalized linear model [GLM], and decision tree [DT]) were used to construct an image-derived signature. Plasma levels of leukotriene B4 (LTB4) and other inflammatory markers were measured by enzyme-linked immunosorbent assay. Correlations between the signature and inflammation markers were analyzed, followed by logistic regression and subgroup analyses. Results: The endoscopic ultrasound image-derived signature, generated using deep learning algorithms, effectively distinguished esophageal cancer from normal esophageal tissue. Among all inflammatory markers, LTB4 exhibited the strongest negative correlation with the image signature and showed significantly higher expression in the healthy control group. Multivariate logistic regression analysis identified LTB4 as an independent risk factor for ESCC (odds ratio = 1.74, p = 0.037). Furthermore, LTB4 expression was significantly associated with patient sex, age, and chemotherapy response. Notably, higher LTB4 levels were linked to an increased likelihood of achieving a favorable therapeutic response. Conclusions: This study demonstrates that deep learning-derived endoscopic ultrasound image features can effectively distinguish ESCC from normal esophageal tissue. By integrating image features with serological data, the authors identified LTB4 as a key inflammation-related biomarker with significant diagnostic and therapeutic predictive value.
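Extracting deep features with a pretrained ResNet50, as the study does for endoscopic ultrasound frames, can be sketched with torchvision. The preprocessing and layer choice below are assumptions, not the study's exact pipeline; the input is a random stand-in for an EUS frame.

```python
import torch
from torchvision.models import resnet50, ResNet50_Weights

# Minimal deep-feature extraction sketch (downloads ImageNet weights
# on first use).
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.fc = torch.nn.Identity()       # keep the 2048-d pooled features
model.eval()

preprocess = weights.transforms()    # resize/normalize as for ImageNet
dummy_eus = torch.rand(3, 224, 224)  # stand-in for an EUS frame
with torch.no_grad():
    features = model(preprocess(dummy_eus).unsqueeze(0))
print(features.shape)  # torch.Size([1, 2048])
```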

Development and validation of machine learning predictive models for gastric volume based on ultrasonography: A multicentre study.

Liu J, Li S, Li M, Li G, Huang N, Shu B, Chen J, Zhu T, Huang H, Duan G

PubMed · Sep 18 2025
Aspiration of gastric contents is a serious complication associated with anaesthesia. Accurate prediction of gastric volume may assist in risk stratification and help prevent aspiration. This study aimed to develop and validate machine learning models to predict gastric volume based on ultrasound and clinical features. This cross-sectional multicentre study was conducted at two hospitals and included adult patients undergoing gastroscopy under intravenous anaesthesia. Patients from Centre 1 were prospectively enrolled and randomly divided into a training set (Cohort A, n = 415) and an internal validation set (Cohort B, n = 179), while patients from Centre 2 served as an external validation set (Cohort C, n = 199). The primary outcome was gastric volume, measured by endoscopic aspiration immediately after ultrasonographic examination. Least absolute shrinkage and selection operator (LASSO) regression was used for feature selection, and eight machine learning models were developed and evaluated using Bland-Altman analysis. The models' ability to predict medium-to-high and high gastric volumes was assessed. The top-performing models were externally validated, and their predictive performance was compared with the traditional Perlas model. Among the 793 enrolled patients, the number and proportion with high gastric volume were 23 (5.5%) in the development cohort, 10 (5.6%) in the internal validation cohort, and 3 (1.5%) in the external validation cohort. Eight models were developed using age, cross-sectional area of the gastric antrum in the right lateral decubitus position (RLD-CSA), and Perlas grade, the variables selected by LASSO regression. In internal validation, Bland-Altman analysis showed that the Perlas model overestimated gastric volume (mean bias 23.5 mL), whereas the new models provided accurate estimates (mean bias -0.1 to 2.0 mL). The models significantly improved prediction of medium-to-high gastric volume (area under the curve [AUC]: 0.74-0.77 vs. 0.63) and high gastric volume (AUC: 0.85-0.94 vs. 0.74). The best-performing adaptive boosting and linear regression models underwent external validation, with AUCs of 0.81 (95% confidence interval [CI], 0.74-0.89) and 0.80 (95% CI, 0.72-0.89) for medium-to-high and 0.96 (95% CI, 0.91-1.00) and 0.96 (95% CI, 0.89-1.00) for high gastric volume. We propose a novel machine learning-based predictive model that outperforms the Perlas model by incorporating the key features of age, RLD-CSA, and Perlas grade, enabling accurate prediction of gastric volume.
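The two statistical building blocks here, LASSO feature selection and Bland-Altman agreement analysis, can be sketched on synthetic stand-in data. Variable names such as rld_csa mirror the reported predictors, but all numbers below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic cohort: informative predictors plus uninformative candidates.
rng = np.random.default_rng(1)
n = 415
age = rng.normal(55, 12, n)
rld_csa = rng.normal(5.0, 1.5, n)       # antral CSA, right lateral decubitus
perlas = rng.integers(0, 3, n).astype(float)
noise_feats = rng.normal(size=(n, 5))   # uninformative candidates
X = np.column_stack([age, rld_csa, perlas, noise_feats])
volume = 8 * rld_csa + 15 * perlas + 0.1 * age + rng.normal(0, 10, n)

# LASSO feature selection via cross-validated regularization strength.
lasso = LassoCV(cv=5, random_state=0).fit(X, volume)
kept = np.flatnonzero(lasso.coef_ != 0)
print("selected feature columns:", kept)  # ideally columns 0, 1, 2

# Bland-Altman: mean bias and 95% limits of agreement vs. measured volume.
diff = lasso.predict(X) - volume
print(f"bias {diff.mean():.1f} mL, LoA ±{1.96 * diff.std():.1f} mL")
```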

The Role of Artificial Intelligence, Including Endoscopic Diagnosis, in the Prediction of Presence, Bleeding, and Mortality of Esophageal Varices.

Furuichi Y, Nishiguchi R, Furuichi Y, Kobayashi S, Fujiwara T, Sato K

PubMed · Sep 18 2025
Esophagogastric varices (EGVs) develop as a complication of progressive liver cirrhosis, and because variceal bleeding can be fatal, regular endoscopic surveillance is necessary. With the development of artificial intelligence (AI) in recent years, it is beginning to be applied to predicting the presence of EGVs, predicting bleeding, and supporting diagnosis and prognostication. Based on previous reports, AI application methods can be classified into four categories: (1) noninvasive prediction using clinical data obtained from clinical records, such as laboratory data, past history, and present illness; (2) invasive detection and prediction using endoscopy and computed tomography (CT); (3) invasive prediction using multimodal AI (clinical data and endoscopy); and (4) invasive virtual measurement on endoscopic and CT images. These methods currently allow AI to be used in the following ways: (1) prediction of EGV presence, variceal grade, bleeding risk, and survival rate; (2) detection and diagnosis of esophageal varices (EVs); (3) prediction of bleeding within 1 year; and (4) prediction of variceal diameter and portal pressure gradient. This review explores current studies on AI applications in assessing EGVs, highlighting their benefits, limitations, and future directions.

Optimized deep learning-accelerated single-breath-hold abdominal HASTE with and without fat saturation improves and accelerates abdominal imaging at 3 Tesla.

Tan Q, Kubicka F, Nickel D, Weiland E, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

PubMed · Sep 18 2025
Deep learning-accelerated single-shot turbo-spin-echo techniques (DL-HASTE) enable single-breath-hold T2-weighted abdominal imaging. However, studies evaluating the image quality of DL-HASTE with and without fat saturation (FS) remain limited. This study aimed to prospectively evaluate the technical feasibility and image quality of abdominal DL-HASTE with and without FS at 3 Tesla. DL-HASTE of the upper abdomen was acquired with variable sequence parameters regarding FS, flip angle (FA), and field of view (FOV) in 10 healthy volunteers and 50 patients. The DL-HASTE sequences were compared with clinical sequences (HASTE, HASTE-FS, and T2-TSE-FS BLADE). Two radiologists independently assessed the sequences for overall image quality, delineation of abdominal organs, artifacts, and fat saturation using a 5-point Likert scale. Breath-hold time of DL-HASTE and DL-HASTE-FS was 21 ± 2 s with fixed FA and 20 ± 2 s with variable FA (p < 0.001), with no overall image quality difference (p > 0.05). DL-HASTE required a 10% larger FOV than DL-HASTE-FS to avoid aliasing artifacts from subcutaneous fat. Both DL-HASTE and DL-HASTE-FS had significantly higher overall image quality scores than standard HASTE acquisitions (DL-HASTE vs. HASTE: 4.8 ± 0.40 vs. 4.1 ± 0.50; DL-HASTE-FS vs. HASTE-FS: 4.6 ± 0.50 vs. 3.6 ± 0.60; p < 0.001). Compared with T2-TSE-FS BLADE, DL-HASTE-FS provided higher overall image quality (4.6 ± 0.50 vs. 4.3 ± 0.63, p = 0.011). DL-HASTE achieved significantly higher image quality (p = 0.006) and organ sharpness scores (p < 0.001) than DL-HASTE-FS. Deep learning-accelerated HASTE with and without fat saturation was feasible at 3 Tesla and showed improved image quality compared with conventional sequences.
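Paired reader scores like these (Likert 1-5 per sequence, same patients) are commonly compared with a Wilcoxon signed-rank test; the abstract does not name the test used, so the SciPy sketch below, on synthetic scores, is illustrative of the analysis pattern rather than the study's exact statistics.

```python
import numpy as np
from scipy.stats import wilcoxon

# Synthetic paired Likert scores (1-5) for two sequences per patient.
rng = np.random.default_rng(7)
n_patients = 60
dl_haste = np.clip(np.round(rng.normal(4.8, 0.4, n_patients)), 1, 5)
haste = np.clip(np.round(rng.normal(4.1, 0.5, n_patients)), 1, 5)

# Wilcoxon signed-rank test: a usual choice for paired ordinal scores.
stat, p = wilcoxon(dl_haste, haste)
print(f"median DL-HASTE {np.median(dl_haste)}, "
      f"median HASTE {np.median(haste)}, p = {p:.2g}")
```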

Bridging the quality gap: Robust colon wall segmentation in noisy transabdominal ultrasound.

Gago L, González MAF, Engelmann J, Remeseiro B, Igual L

PubMed · Sep 18 2025
Colon wall segmentation in transabdominal ultrasound is challenging due to variations in image quality, speckle noise, and ambiguous boundaries. Existing methods struggle with low-quality images due to their inability to adapt to varying noise levels, poor boundary definition, and reduced contrast in ultrasound imaging, resulting in inconsistent segmentation performance. We present a novel quality-aware segmentation framework that simultaneously predicts image quality and adapts the segmentation process accordingly. Our approach uses a U-Net architecture with a ConvNeXt encoder backbone, enhanced with a parallel quality prediction branch that serves as a regularization mechanism. Our model learns robust features by explicitly modeling image quality during training. We evaluate our method on the C-TRUS dataset and demonstrate superior performance compared to state-of-the-art approaches, particularly on challenging low-quality images. Our method achieves Dice scores of 0.7780, 0.7025, and 0.5970 for high, medium, and low-quality images, respectively. The proposed quality-aware segmentation framework represents a significant step toward clinically viable automated colon wall segmentation systems.
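The quality-prediction branch acting as a regularizer on a shared encoder can be sketched as a toy two-head network. The real model uses a U-Net with a ConvNeXt encoder; the tiny encoder-decoder, the three quality classes, and the 0.1 loss weight below are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class QualityAwareSegNet(nn.Module):
    """Toy encoder-decoder with a parallel image-quality head."""
    def __init__(self, n_quality_classes: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.GELU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.GELU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
        )
        # Quality branch shares the encoder and acts as a regularizer.
        self.quality_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, n_quality_classes),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.quality_head(z)

model = QualityAwareSegNet()
mask_logits, quality_logits = model(torch.randn(2, 1, 128, 128))
# Joint loss: segmentation term plus weighted quality term.
seg_loss = nn.functional.binary_cross_entropy_with_logits(
    mask_logits, torch.randint(0, 2, (2, 1, 128, 128)).float())
q_loss = nn.functional.cross_entropy(quality_logits, torch.tensor([0, 2]))
loss = seg_loss + 0.1 * q_loss  # quality weight is a guess
print(mask_logits.shape, quality_logits.shape, float(loss))
```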

Deep Learning for Automated Measures of SUV and Molecular Tumor Volume in [68Ga]PSMA-11 or [18F]DCFPyL, [18F]FDG, and [177Lu]Lu-PSMA-617 Imaging with Global Threshold Regional Consensus Network.

Jackson P, Buteau JP, McIntosh L, Sun Y, Kashyap R, Casanueva S, Ravi Kumar AS, Sandhu S, Azad AA, Alipour R, Saghebi J, Kong G, Jewell K, Eifer M, Bollampally N, Hofman MS

PubMed · Sep 18 2025
Metastatic castration-resistant prostate cancer has a high rate of mortality with a limited number of effective treatments after hormone therapy. Radiopharmaceutical therapy with [177Lu]Lu-prostate-specific membrane antigen-617 (LuPSMA) is one treatment option; however, response varies and is partly predicted by PSMA expression and metabolic activity, assessed on [68Ga]PSMA-11 or [18F]DCFPyL and [18F]FDG PET, respectively. Automated methods to measure these on PET imaging have previously yielded modest accuracy. Refining computational workflows and standardizing approaches may improve patient selection and prognostication for LuPSMA therapy. Methods: PET/CT and quantitative SPECT/CT images from an institutional cohort of patients staged for LuPSMA therapy were annotated for total disease burden. In total, 676 [68Ga]PSMA-11 or [18F]DCFPyL PET, 390 [18F]FDG PET, and 477 LuPSMA SPECT images were used for development of the automated workflow, which was tested on 56 cases with externally referred PET/CT staging. A segmentation framework, the Global Threshold Regional Consensus Network, was developed based on nnU-Net, with processing refinements to improve boundary definition and overall label accuracy. Results: Using the model to contour disease extent, the mean volumetric Dice similarity coefficient was 0.94 for [68Ga]PSMA-11 or [18F]DCFPyL PET, 0.84 for [18F]FDG PET, and 0.97 for LuPSMA SPECT. On external test cases, Dice accuracy was 0.95 and 0.84 on PSMA and FDG PET, respectively. The refined models yielded consistent improvements over nnU-Net, with an increase of 3%-5% in Dice accuracy and 10%-17% in surface agreement. Quantitative biomarkers were compared with a human-defined ground truth using the Pearson coefficient, with scores for [68Ga]PSMA-11 or [18F]DCFPyL, [18F]FDG, and LuPSMA, respectively, of 0.98, 0.94, and 0.99 for disease volume; 0.98, 0.88, and 0.99 for SUVmean; 0.96, 0.91, and 0.99 for SUVmax; and 0.97, 0.96, and 0.99 for volume-intensity product. Conclusion: Delineation of disease extent and tracer avidity can be performed with a high degree of accuracy using automated deep learning methods. By incorporating threshold-based postprocessing, the tools can closely match the output of manual workflows. Pretrained models and scripts to adapt to institutional data are provided for open use.
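Volumetric Dice and a threshold-based refinement of a network mask, the ingredients behind the reported postprocessing gains, can be sketched as follows. The 41%-of-SUVmax rule is a common PET segmentation convention used here purely for illustration; it is not the paper's exact rule, and all volumes are synthetic.

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Volumetric Dice similarity coefficient for binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy SUV volume, a stand-in network mask, and a threshold refinement
# that trims the mask to voxels above a fraction of the in-mask SUVmax.
rng = np.random.default_rng(3)
suv = rng.gamma(2.0, 1.5, size=(64, 64, 64))
net_mask = suv > 4.0                                # stand-in CNN output
refined = net_mask & (suv > 0.41 * suv[net_mask].max())
gt = suv > 3.8                                      # toy reference mask
print(f"Dice net {dice(net_mask, gt):.3f}, refined {dice(refined, gt):.3f}")
```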