Page 16 of 99990 results

Geometric-Driven Cross-Modal Registration Framework for Optical Scanning and CBCT Models in AR-Based Maxillofacial Surgical Navigation.

Liu Y, Wang E, Gong M, Tao B, Wu Y, Qi X, Chen X

PubMed · Sep 4, 2025
Accurate preoperative planning for dental implants, especially in edentulous or partially edentulous patients, relies on precise localization of radiographic templates that guide implant positioning. By wearing a patient-specific radiographic template, clinicians can better assess anatomical constraints and plan optimal implant paths. However, due to the low radiopacity of such templates, their spatial position is difficult to determine directly from cone-beam computed tomography (CBCT) scans. To overcome this limitation, high-resolution optical scans of the templates are acquired, providing detailed geometric information for accurate spatial registration. This paper proposes a geometric-driven cross-modal registration framework that aligns the optical scan model of the radiographic template with patient CBCT data, enhancing registration accuracy through geometric feature extraction such as curvature and occlusal contours. A hybrid deep learning workflow further improves robustness, achieving a root mean square error (RMSE) of 1.68 mm and mean absolute error (MAE) of 1.25 mm. The system also incorporates augmented reality (AR) for real-time surgical navigation. Clinical and phantom experiments validate its effectiveness in supporting precise implant path planning and execution. Our proposed system enhances the efficiency and safety of dental implant surgery by integrating geometric feature extraction, deep learning-based registration, and AR-assisted navigation.
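The reported RMSE and MAE are standard point-residual metrics for registration. A minimal, self-contained sketch of rigid point-set alignment (the Kabsch/SVD least-squares solution, a common building block in such pipelines, not the paper's full geometric-feature-driven method) together with both error metrics:

```python
import numpy as np

def kabsch_align(src, dst):
    """Least-squares rigid alignment (rotation + translation) of src onto dst.
    Both inputs are (N, 3) arrays of corresponding points."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def registration_errors(src, dst, R, t):
    """RMSE and MAE of point-to-point residuals after applying (R, t) to src."""
    resid = np.linalg.norm((src @ R.T + t) - dst, axis=1)
    return np.sqrt((resid ** 2).mean()), resid.mean()

# toy check: recover a known rotation about z plus a translation
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
moved = pts @ Rz.T + np.array([1.0, -2.0, 0.5])
R, t = kabsch_align(pts, moved)
rmse, mae = registration_errors(pts, moved, R, t)
```

With exact correspondences the residuals vanish to machine precision; in practice, geometric features such as curvature would be used to establish the correspondences first.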

Summary from the 2025 International Society for Magnetic Resonance in Medicine workshop on body MRI: Unsolved problems and unmet needs.

Hecht EM, Hu HH, Serai SD, Wu HH, Brunsing RL, Guimaraes AR, Kurugol S, Ringe KI, Syed AB

PubMed · Sep 4, 2025
In March of 2025, 145 attendees convened at the Hub for Clinical Collaboration of the Children's Hospital of Philadelphia for the inaugural International Society for Magnetic Resonance in Medicine (ISMRM) Body MRI Study Group workshop entitled "Body MRI: Unsolved Problems and Unmet Needs." Approximately 24% of the attendees were MDs or MD/PhDs, 45% were PhDs, and 30% were early-career trainees and postdoctoral associates. Among the invited speakers and moderators, 28% were from outside the United States, with a 40:60 female-to-male ratio. The 2.5-day program brought together a multidisciplinary group of scientists, radiologists, technologists, and trainees. Session topics included quantitative imaging biomarkers, low- and high-field strengths, artifact and motion correction, rapid imaging and focused protocols, and artificial intelligence. Another key session focused on the importance of team science and allowed speakers from academia and industry to share their personal experiences and offer advice on how to successfully translate new MRI technology into clinical practice. This article summarizes key points from the event and perceived unmet clinical needs within the field of body MRI.

A Cascaded Segmentation-Classification Deep Learning Framework for Preoperative Prediction of Occult Peritoneal Metastasis and Early Recurrence in Advanced Gastric Cancer.

Zou T, Chen P, Wang T, Lei T, Chen X, Yang F, Lin X, Li S, Yi X, Zheng L, Lin Y, Zheng B, Song J, Wang L

PubMed · Sep 4, 2025
To develop a cascaded deep learning (DL) framework integrating tumor segmentation with metastatic risk stratification for preoperative prediction of occult peritoneal metastasis (OPM) in advanced gastric cancer (GC), and validate its generalizability for early peritoneal recurrence (PR) prediction. This multicenter study enrolled 765 patients with advanced GC from three institutions. We developed a two-stage framework as follows: (1) V-Net-based tumor segmentation on CT; (2) DL-based metastatic risk classification using segmented tumor regions. Clinicopathological predictors were integrated with deep learning probabilities to construct a combined model. Validation cohorts comprised internal validation (Test1 for OPM, n=168; Test2 for early PR, n=212) and external validation (Test3 for early PR, n=57, from two independent centers). Multivariable analysis identified Borrmann type (OR=1.314, 95% CI: 1.239-1.394), CA125 ≥35 U/mL (OR=1.301, 95% CI: 1.127-1.499), and CT-N+ stage (OR=1.259, 95% CI: 1.124-1.415) as independent OPM predictors. The combined model demonstrated robust performance for both OPM and early PR prediction: it achieved AUCs of 0.938 (Train) and 0.916 (Test1) for OPM, with improvements over the clinical (ΔAUC +0.039 to +0.107) and DL-only models (ΔAUC +0.044 to +0.104), while attaining AUCs of 0.820-0.825 for early PR (Test2 and Test3) with balanced sensitivity (79.7-88.9%) and specificity (72.4-73.3%). Decision curve analysis confirmed net clinical benefit across clinical thresholds. This CT-based cascaded framework enables reliable preoperative risk stratification for OPM and early PR in advanced GC, potentially refining indications for personalized therapeutic pathways.
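The "combined model" pattern here — feeding the DL probability alongside clinical covariates into a logistic model — can be sketched with synthetic data. Predictor names follow the abstract, but all values, effect sizes, and the hand-rolled fitting routine are invented for illustration (the paper's actual model and coefficients are not reproduced):

```python
import numpy as np

def rank_auc(y, score):
    """AUC as the probability that a random positive outscores a random negative."""
    pos, neg = score[y == 1], score[y == 0]
    greater = (pos[:, None] > neg[None, :]).mean()
    ties = (pos[:, None] == neg[None, :]).mean()
    return greater + 0.5 * ties

def fit_logistic(X, y, lr=0.5, steps=3000):
    """Plain gradient-ascent logistic regression; bias folded into the design."""
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return 1.0 / (1.0 + np.exp(-Xb @ w))        # fitted probabilities

# synthetic cohort with an invented generative model
rng = np.random.default_rng(42)
n = 400
dl_prob   = rng.uniform(0, 1, n)        # stage-2 network's metastasis probability
borrmann  = rng.integers(1, 5, n)       # Borrmann type (1-4)
ca125_pos = rng.integers(0, 2, n)       # CA125 >= 35 U/mL indicator
ctn_pos   = rng.integers(0, 2, n)       # CT-N+ stage indicator
logit = 2.0 * dl_prob + 0.5 * borrmann + 0.8 * ca125_pos + 0.8 * ctn_pos - 3.0
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

X = np.column_stack([dl_prob, borrmann, ca125_pos, ctn_pos])
auc_combined = rank_auc(y, fit_logistic(X, y))
auc_dl_only  = rank_auc(y, dl_prob)
```

The in-sample gain of the combined score over the DL probability alone mirrors the ΔAUC comparison reported in the abstract, though only qualitatively.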

Predicting first-trimester pregnancy outcome in threatened miscarriage: A comparison of a multivariate logistic regression and machine learning models.

Sammut L, Bezzina P, Gibbs V, Muscat-Baron Y, Agius-Camenzuli A, Calleja-Agius J

PubMed · Sep 4, 2025
Threatened miscarriage (TM), defined as first-trimester vaginal bleeding with a closed cervix and detectable fetal cardiac activity, affects up to 30% of clinically recognised pregnancies and is linked to increased risk of adverse outcomes. This study evaluates the predictive value of first-trimester ultrasound (US) and biochemical (BC) markers in determining outcomes among women with TM symptoms. This prospective cohort study recruited 118 women with viable singleton pregnancies (5+0 to 12+6 weeks' gestation) from Malta's national public hospital between January 2023 and June 2024. Participants underwent US and BC assessment, along with collection of clinical and sociodemographic data. Pregnancy outcomes were followed to term and classified as live birth or loss. Univariate logistic regression identified individual predictors. Multivariate logistic regression (MLR) and random forest (RF) modelling assessed combined predictive performance. Among 118 TM cases, 77% resulted in live birth and 23% in loss. MLR identified progesterone, cervical length, mean gestational sac diameter (MGSD), trophoblast thickness, sFlt-1:PlGF ratio, and maternal age as significant predictors. Higher progesterone, cervical length, MGSD, and sFlt-1:PlGF ratio reduced risk, while maternal age over 35 increased it. MLR achieved 82.7% accuracy (AUC = 0.89). RF improved accuracy to 93.1% (AUC = 0.97), confirming the combined predictive value of US and BC markers. US and BC markers hold predictive value in TM. Machine learning, particularly RF, may improve early clinical risk stratification. This tool may support timely decision-making and personalised monitoring, intervention, and counselling for women with TM.
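A fitted multivariate logistic model over these predictors acts as a risk score at the bedside. The coefficients below are invented purely for illustration; only their signs follow the reported directions (protective markers positive on the live-birth logit, maternal age over 35 negative) — they are not the study's fitted values:

```python
import math

# Hypothetical coefficients on the live-birth logit scale (illustrative only).
COEF = {
    "intercept": -2.0,
    "progesterone_ng_ml": 0.08,   # higher progesterone -> lower loss risk
    "cervical_length_mm": 0.06,
    "mgsd_mm": 0.05,
    "sflt_plgf_ratio": 0.10,      # higher ratio reduced risk in this cohort
    "age_over_35": -0.9,          # age > 35 -> higher loss risk
}

def live_birth_probability(markers):
    """Logistic risk score: sigmoid of the linear predictor."""
    z = COEF["intercept"] + sum(COEF[k] * v for k, v in markers.items())
    return 1.0 / (1.0 + math.exp(-z))

favorable   = {"progesterone_ng_ml": 35, "cervical_length_mm": 38,
               "mgsd_mm": 25, "sflt_plgf_ratio": 4, "age_over_35": 0}
unfavorable = {"progesterone_ng_ml": 10, "cervical_length_mm": 25,
               "mgsd_mm": 12, "sflt_plgf_ratio": 1, "age_over_35": 1}
p_fav = live_birth_probability(favorable)
p_unfav = live_birth_probability(unfavorable)
```

A random forest, as used in the study, would replace the single linear predictor with an ensemble of decision trees, which is what lifted accuracy from 82.7% to 93.1% here.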

Multi-task deep learning for automatic image segmentation and treatment response assessment in metastatic ovarian cancer.

Drury B, Machado IP, Gao Z, Buddenkotte T, Mahani G, Funingana G, Reinius M, McCague C, Woitek R, Sahdev A, Sala E, Brenton JD, Crispin-Ortuzar M

PubMed · Sep 3, 2025
Background: High-grade serous ovarian carcinoma (HGSOC) is characterised by significant spatial and temporal heterogeneity, often presenting at an advanced metastatic stage. One of the most common treatment approaches involves neoadjuvant chemotherapy (NACT), followed by surgery. However, the multi-scale complexity of HGSOC poses a major challenge in evaluating response to NACT. Methods: Here, we present a multi-task deep learning approach that facilitates simultaneous segmentation of pelvic/ovarian and omental lesions in contrast-enhanced computerised tomography (CE-CT) scans, as well as treatment response assessment in metastatic ovarian cancer. The model combines multi-scale feature representations from two identical U-Net architectures, allowing for an in-depth comparison of CE-CT scans acquired before and after treatment. The network was trained using 198 CE-CT images of 99 ovarian cancer patients for predicting segmentation masks and evaluating treatment response. Results: It achieves an AUC of 0.78 (95% CI [0.70-0.91]) in an independent cohort of 98 scans of 49 ovarian cancer patients from a different institution. In addition to the classification performance, the segmentation Dice scores are only slightly lower than the current state-of-the-art for HGSOC segmentation. Conclusions: This work is the first to demonstrate the feasibility of a multi-task deep learning approach in assessing chemotherapy-induced tumour changes across the main disease burden of patients with complex multi-site HGSOC, which could be used for treatment response evaluation and disease monitoring.
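The segmentation half of such a multi-task model is scored with the Dice coefficient mentioned in the abstract. A minimal sketch of the metric itself (not the paper's network), verified on two overlapping toy 3-D masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

# toy volumes: two 8x8x8 cubes, one shifted by 2 voxels along each axis
a = np.zeros((16, 16, 16), dtype=bool); a[2:10, 2:10, 2:10] = True   # 512 voxels
b = np.zeros((16, 16, 16), dtype=bool); b[4:12, 4:12, 4:12] = True   # 512 voxels
# the shared region is a 6x6x6 cube (216 voxels), so Dice = 2*216/1024 = 0.421875
score = dice(a, b)
```

The epsilon term keeps the metric defined when both masks are empty, a common convention in segmentation evaluation.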

Automated Kidney Tumor Segmentation in CT Images Using Deep Learning: A Multi-Stage Approach.

Kan HC, Fan GM, Wei MH, Lin PH, Shao IH, Yu KJ, Chien TH, Pang ST, Wu CT, Peng SJ

PubMed · Sep 3, 2025
Computed tomography (CT) remains the primary modality for assessing renal tumors; however, tumor identification and segmentation rely heavily on manual interpretation by clinicians, which is time-consuming and subject to inter-observer variability. The heterogeneity of tumor appearance and indistinct margins further complicate accurate delineation, impacting histopathological classification, treatment planning, and prognostic assessment. There is a pressing clinical need for an automated segmentation tool to enhance diagnostic workflows and support clinical decision-making with results that are reliable, accurate, and reproducible. This study developed a fully automated pipeline based on the DeepMedic 3D convolutional neural network for the segmentation of kidneys and renal tumors through multi-scale feature extraction. The model was trained and evaluated using 5-fold cross-validation on a dataset of 382 contrast-enhanced CT scans manually annotated by experienced physicians. Image preprocessing included Hounsfield unit conversion, windowing, 3D reconstruction, and voxel resampling. Post-processing was also employed to refine output masks and improve model generalizability. The proposed model achieved high performance in kidney segmentation, with an average Dice coefficient of 93.82 ± 1.38%, precision of 94.86 ± 1.59%, and recall of 93.66 ± 1.77%. In renal tumor segmentation, the model attained a Dice coefficient of 88.19 ± 1.24%, precision of 90.36 ± 1.90%, and recall of 88.23 ± 2.02%. Visual comparisons with ground truth annotations confirmed the clinical relevance and accuracy of the predictions. The proposed DeepMedic-based framework demonstrates robust, accurate segmentation of kidneys and renal tumors on CT images. With its potential for real-time application, this model could enhance diagnostic efficiency and treatment planning in renal oncology.
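The preprocessing steps named above — Hounsfield-unit windowing and voxel resampling — can be sketched in a few lines. The window centre/width and the spacings below are illustrative choices (a common abdominal soft-tissue window), not the paper's settings:

```python
import numpy as np

def window_ct(hu, center=40.0, width=400.0):
    """Clip a Hounsfield-unit volume to a display window and scale to [0, 1].
    C=40, W=400 is a typical soft-tissue window for abdominal CT."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def resample_nearest(vol, src_spacing, dst_spacing):
    """Nearest-neighbour resampling of a 3D volume to a new voxel spacing."""
    src = np.asarray(src_spacing, float)
    dst = np.asarray(dst_spacing, float)
    new_shape = np.round(np.array(vol.shape) * src / dst).astype(int)
    idx = [np.minimum((np.arange(n) * dst[i] / src[i]).astype(int),
                      vol.shape[i] - 1) for i, n in enumerate(new_shape)]
    return vol[np.ix_(idx[0], idx[1], idx[2])]

# toy volume: air everywhere, a small soft-tissue cube in the middle
hu = np.full((4, 4, 4), -1000.0)
hu[1:3, 1:3, 1:3] = 50.0
win = window_ct(hu)                                   # air -> 0.0, tissue -> 0.525
iso = resample_nearest(win, src_spacing=(2.0, 1.0, 1.0),
                       dst_spacing=(1.0, 1.0, 1.0))   # doubles the first axis
```

Production pipelines would use trilinear or spline interpolation for the resampling step; nearest-neighbour keeps the sketch dependency-free.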

Voxel-level Radiomics and Deep Learning Based on MRI for Predicting Microsatellite Instability in Endometrial Carcinoma: A Two-center Study.

Tian CH, Sun P, Xiao KY, Niu XF, Li XS, Xu N

PubMed · Sep 3, 2025
To develop and validate a non-invasive deep learning model that integrates voxel-level radiomics with multi-sequence MRI to predict microsatellite instability (MSI) status in patients with endometrial carcinoma (EC). This two-center retrospective study included 375 patients with pathologically confirmed EC from two medical centers. Patients underwent preoperative multiparametric MRI (T2WI, DWI, CE-T1WI), and MSI status was determined by immunohistochemistry. Tumor regions were manually segmented, and voxel-level radiomics features were extracted following IBSI guidelines. A dual-channel 3D deep neural network based on the Vision-Mamba architecture was constructed to jointly process voxel-wise radiomics feature maps and MR images. The model was trained and internally validated on cohorts from Center I and tested on an external cohort from Center II. Performance was compared with Vision Transformer, 3D-ResNet, and traditional radiomics models. Interpretability was assessed with feature importance ranking and SHAP value visualization. The Vision-Mamba model achieved strong predictive performance across all datasets. In the external test cohort, it yielded an AUC of 0.866, accuracy of 0.875, sensitivity of 0.833, and specificity of 0.900, outperforming other models. Integrating voxel-level radiomics features with MRI enabled the model to better capture both local and global tumor heterogeneity compared to traditional approaches. Interpretability analysis identified glszm_SizeZoneNonUniformityNormalized, ngtdm_Busyness, and glcm_Correlation as top features, with SHAP analysis revealing that tumor parenchyma, regions of enhancement, and diffusion restriction were pivotal for MSI prediction. The proposed voxel-level radiomics and deep learning model provides a robust, non-invasive tool for predicting MSI status in endometrial carcinoma, potentially supporting personalized treatment decision-making.
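A "voxel-level" radiomics feature is a texture statistic computed in a neighbourhood around every voxel, yielding a feature map the same shape as the image that a network can consume alongside the raw MRI. A toy 2-D stand-in using local variance (the IBSI features named above, such as GLSZM and GLCM statistics, are more elaborate but follow the same per-voxel-map idea):

```python
import numpy as np

def local_variance(img, r=1):
    """Per-pixel texture map: variance in a (2r+1)x(2r+1) neighbourhood,
    with edge padding so the map matches the image shape."""
    pad = np.pad(img.astype(float), r, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(pad, (2 * r + 1, 2 * r + 1))
    return windows.var(axis=(-2, -1))

# toy image: a step edge; the feature map lights up only near the boundary
img = np.zeros((8, 8))
img[:, 4:] = 1.0
fmap = local_variance(img)
```

Inside either homogeneous half the variance map is exactly zero, while columns straddling the edge carry positive values — the kind of local heterogeneity signal the dual-channel model exploits.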

End-to-end deep learning model with multi-channel and attention mechanisms for multi-class diagnosis in CT-T staging of advanced gastric cancer.

Liu B, Jiang P, Wang Z, Wang X, Wang Z, Peng C, Liu Z, Lu C, Pan D, Shan X

PubMed · Sep 3, 2025
Homogeneous AI assessment is required for CT-T staging of gastric cancer. To construct an end-to-end CT-based deep learning (DL) model for tumor T-staging in advanced gastric cancer, a retrospective study was conducted on 460 presurgical CT patients with advanced gastric cancer between 2011 and 2024. A three-dimensional (3D) convolutional UNet-based automatic segmentation model was employed to segment tumors, and a SmallFocusNet-based ternary classification model was built for CT-T staging. Finally, these models were integrated to create an end-to-end DL model. The segmentation model's performance was assessed using the Dice similarity coefficient (DSC), Intersection over Union (IoU), and 95% Hausdorff Distance (HD_95), while the classification model's performance was measured with the area under the Receiver Operating Characteristic curve (AUC), sensitivity, specificity, and F1-score. Finally, the end-to-end DL model was compared with the radiologist using the McNemar test. The data were divided into Dataset 1 (423 cases for training and test sets, mean age 65.0 years ± 9.46 [SD]) and Dataset 2 (37 cases for the independent validation set, mean age 68.8 years ± 9.28 [SD]). For the segmentation task, the model achieved a DSC of 0.860 ± 0.065 and an IoU of 0.760 ± 0.096 in the test set of Dataset 1, and a DSC of 0.870 ± 0.164 and an IoU of 0.793 ± 0.168 in Dataset 2. For the classification task, the model demonstrated a macro-average AUC of 0.882 (95% CI 0.812-0.926) and an average sensitivity of 76.9% (95% CI 67.6%-85.3%) in the test set of Dataset 1, and a macro-average AUC of 0.862 (95% CI 0.723-0.942) and an average sensitivity of 76.3% (95% CI 59.8%-90.0%) in Dataset 2. Meanwhile, the DL model's performance was better than that of the radiologist (accuracy 91.9% vs 82.1%, P = 0.007). The end-to-end DL model for CT-T staging is highly accurate and consistent in pre-treatment staging of advanced gastric cancer.
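The model-versus-radiologist comparison uses the McNemar test, which looks only at the discordant pairs of a paired contingency table. A minimal implementation of the continuity-corrected chi-square version; the discordant counts below are invented for illustration, not the paper's data:

```python
import math

def mcnemar_chi2(b, c):
    """McNemar test with continuity correction for two paired classifiers.
    b = cases only classifier A got right, c = cases only classifier B got right.
    Returns (chi-square statistic, two-sided p-value, df=1)."""
    if b + c == 0:
        return 0.0, 1.0
    chi2 = (abs(b - c) - 1.0) ** 2 / (b + c)
    # survival function of a 1-df chi-square: P(X > x) = erfc(sqrt(x/2))
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

# hypothetical discordant counts: model right / reader wrong on 14 cases,
# reader right / model wrong on 3 cases
chi2, p = mcnemar_chi2(14, 3)
```

Because only discordant pairs enter the statistic, cases both model and reader stage identically carry no information about which one is better — which is exactly why McNemar (rather than two independent accuracy tests) is the right tool for paired reads.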

Disentangled deep learning method for interior tomographic reconstruction of low-dose X-ray CT.

Chen C, Zhang L, Gao H, Wang Z, Xing Y, Chen Z

PubMed · Sep 3, 2025
Objective: Low-dose interior tomography integrates low-dose CT (LDCT) with region-of-interest (ROI) imaging, which finds wide application in radiation dose reduction and high-resolution imaging. However, the combined effects of noise and data truncation pose great challenges for accurate tomographic reconstruction. This study aims to develop a novel reconstruction framework that achieves high-quality ROI reconstruction and efficient extension of the recoverable region, providing an innovative solution to this coupled ill-posed problem.
Approach: We conducted a comprehensive analysis of projection data composition and angular sampling patterns in low-dose interior tomography. Based on this analysis, we proposed two novel deep learning-based reconstruction pipelines: (1) Deep Projection Extraction-based Reconstruction (DPER), which focuses on ROI reconstruction by disentangling and extracting noise and background projection contributions using a dual-domain deep neural network; and (2) DPER with Progressive extension (DPER-Pro), which enhances DPER with a progressive "coarse-to-fine" strategy for missing data compensation, enabling simultaneous ROI reconstruction and extension of recoverable regions. The proposed methods were rigorously evaluated through extensive experiments on simulated torso datasets and real CT scans of a torso phantom.
Main Results: The experimental results demonstrated that DPER effectively handles the coupled ill-posed problem and achieves high-quality ROI reconstructions by accurately extracting noise and background projections. DPER-Pro extends the recoverable region while preserving ROI image quality by leveraging disentangled projection components and angular sampling patterns. Both methods outperform competing approaches in reconstructing reliable structures, enhancing generalization, and mitigating noise and truncation artifacts.
Significance: This work presents a novel decoupled deep learning framework for low-dose interior tomography that provides a robust and effective solution to the challenges posed by noise and truncated projections. The proposed methods significantly improve ROI reconstruction quality while efficiently recovering structural information in exterior regions, offering a promising pathway for advancing low-dose ROI imaging across a wide range of applications.
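The two coupled degradations the framework addresses — detector truncation from interior scanning and noise from low dose — can be simulated with a toy parallel-beam projector. Everything here is illustrative (a crude pixel-binning forward model, an invented phantom, Gaussian rather than properly calibrated noise), not the paper's simulation setup:

```python
import numpy as np

def project(img, angles_deg):
    """Toy parallel-beam projector: each pixel's value is accumulated into the
    detector bin given by its rounded signed distance along the beam normal."""
    n = img.shape[0]
    yy, xx = np.mgrid[:n, :n]
    xc, yc = xx - n / 2.0, yy - n / 2.0
    sino = np.zeros((len(angles_deg), n))
    for i, a in enumerate(np.deg2rad(angles_deg)):
        t = np.round(xc * np.cos(a) + yc * np.sin(a) + n / 2.0).astype(int)
        np.add.at(sino[i], np.clip(t, 0, n - 1), img)   # unbuffered accumulation
    return sino

# phantom: an attenuating disc with a denser interior ROI
n = 64
y, x = np.mgrid[:n, :n]
phantom = (((x - 32) ** 2 + (y - 32) ** 2) < 28 ** 2).astype(float)
phantom += (((x - 32) ** 2 + (y - 32) ** 2) < 10 ** 2).astype(float)

sino = project(phantom, np.linspace(0.0, 180.0, 60, endpoint=False))

# the coupled ill-posedness: keep only central detector channels (interior
# problem), then add noise (low dose)
rng = np.random.default_rng(1)
truncated = sino[:, 20:44]
noisy = truncated + rng.normal(scale=0.5, size=truncated.shape)
```

Reconstructing the ROI from `noisy` alone is exactly the coupled problem DPER targets: the truncated channels remove the exterior contribution, and the noise corrupts what remains.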

Decoding Fibrosis: Transcriptomic and Clinical Insights via AI-Derived Collagen Deposition Phenotypes in MASLD

Wojciechowska, M. K., Thing, M., Hu, Y., Mazzoni, G., Harder, L. M., Werge, M. P., Kimer, N., Das, V., Moreno Martinez, J., Prada-Medina, C. A., Vyberg, M., Goldin, R., Serizawa, R., Tomlinson, J., Douglas Gaalsgard, E., Woodcock, D. J., Hvid, H., Pfister, D. R., Jurtz, V. I., Gluud, L.-L., Rittscher, J.

medRxiv preprint · Sep 2, 2025
Histological assessment is foundational to multi-omics studies of liver disease, yet conventional fibrosis staging lacks resolution, and quantitative metrics like collagen proportionate area (CPA) fail to capture tissue architecture. While recent AI-driven approaches offer improved precision, they are proprietary and not accessible to academic research. Here, we present a novel, interpretable AI-based framework for characterising liver fibrosis from picrosirius red (PSR)-stained slides. By identifying data-driven collagen deposition phenotypes (CDPs), each capturing a distinct morphology, our method substantially improves the sensitivity and specificity of downstream transcriptomic and proteomic analyses compared to CPA and traditional fibrosis scores. Pathway analysis reveals that CDPs 4 and 5 are associated with active extracellular matrix remodelling, while phenotype correlates highlight links to liver functional status. Importantly, we demonstrate that selected CDPs can predict clinical outcomes with similar accuracy to established fibrosis metrics. All models and tools are made freely available to support transparent and reproducible multi-omics pathology research.
Highlights:
- We present a set of data-driven collagen deposition phenotypes for analysing PSR-stained liver biopsies, offering a spatially informed alternative to conventional fibrosis staging and CPA, available as open-source code.
- The identified collagen deposition phenotypes enhance transcriptomic and proteomic signal detection, revealing active ECM remodelling and distinct functional tissue states.
- Selected phenotypes predict clinical outcomes with performance comparable to fibrosis stage and CPA, highlighting their potential as candidate quantitative indicators of fibrosis severity.
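CPA, the baseline metric the phenotypes are benchmarked against, is simply the area fraction of PSR-positive (collagen) pixels within the tissue region — which is why it ignores tissue architecture. A minimal sketch on synthetic masks:

```python
import numpy as np

def collagen_proportionate_area(collagen_mask, tissue_mask):
    """CPA: fraction of the tissue area occupied by collagen-positive pixels."""
    tissue = tissue_mask.astype(bool)
    collagen = collagen_mask.astype(bool) & tissue   # ignore staining off-tissue
    return collagen.sum() / tissue.sum()

# toy slide: 100x100 tissue region containing a 12x100 band of collagen
tissue = np.zeros((128, 128), dtype=bool)
tissue[10:110, 10:110] = True
collagen = np.zeros_like(tissue)
collagen[10:22, 10:110] = True
cpa = collagen_proportionate_area(collagen, tissue)   # 1200 / 10000
```

Note that a thin band and a scattered speckle pattern with the same pixel count give identical CPA — exactly the spatial blindness the CDP approach is designed to overcome.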
