
Raith S, Pankert T, Jaganathan S, Pankert K, Lee H, Peters F, Hölzle F, Modabber A

PubMed · Oct 11, 2025
Mandibular reconstruction following continuity resection due to tumor ablation or osteonecrosis remains a significant challenge in maxillofacial surgery. Virtual surgical planning (VSP) relies on accurate segmentation of the mandible, yet existing AI models typically include teeth, making them unsuitable for planning the dimensions of autologous transplants aimed at reconstructing edentulous mandibles optimized for dental implant insertion. This study investigates the feasibility of using deep learning-based segmentation to generate anatomically valid, toothless mandibles from dentate CT scans, ensuring geometric accuracy for reconstructive planning. A two-stage convolutional neural network (CNN) approach was employed to segment mandibles from computed tomography (CT) data. The dataset (n = 246) included dentate, partially dentate, and edentulous mandibles. Ground truth segmentations were manually modified to create Class III (moderate alveolar atrophy) and Class V (severe atrophy) models, representing different degrees of post-extraction bone resorption. The AI models were trained on the original (O), Class III (Cl. III), and Class V (Cl. V) datasets, and performance was evaluated using Dice similarity coefficients (DSC), average surface distance, and automatically detected anatomical curvatures. AI-generated segmentations demonstrated high anatomical accuracy across all models, with mean DSCs exceeding 0.94. Accuracy was highest in edentulous mandibles (DSC 0.96 ± 0.014) and slightly lower in fully dentate cases, particularly for Class V modifications (DSC 0.936 ± 0.030). The caudolateral curve remained consistent, confirming that baseline mandibular geometry was preserved despite alveolar ridge modifications. This study confirms that AI-driven segmentation can generate anatomically valid edentulous mandibles from dentate CT scans with high accuracy. The innovation of this work is the precise adaptation of alveolar ridge geometry, making it a valuable tool for patient-specific virtual surgical planning in mandibular reconstruction.
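
For orientation, the Dice similarity coefficient reported above is straightforward to compute from two binary masks; a minimal sketch (the mask contents below are synthetic, not study data):

```python
# Dice similarity coefficient (DSC) on binary volumes:
# DSC = 2|A ∩ B| / (|A| + |B|)
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

# Synthetic stand-ins for an AI segmentation and its ground truth.
pred = np.zeros((64, 64, 64), bool); pred[20:40, 20:40, 20:40] = True
truth = np.zeros_like(pred);         truth[22:42, 20:40, 20:40] = True
print(f"DSC = {dice(pred, truth):.3f}")  # 0.900 for this synthetic overlap
```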

Shinkawa H, Ueda D, Kurimoto S, Kaibori M, Ueno M, Yasuda S, Ikoma H, Aihara T, Nakai T, Kinoshita M, Kosaka H, Hayami S, Matsuo Y, Morimura R, Nakajima T, Nobori C, Ishizawa T

PubMed · Oct 11, 2025
No prior reports have described deep-learning (DL) models using computed tomography (CT) as an imaging biomarker for predicting postoperative long-term outcomes in patients with hepatocellular carcinoma (HCC). This study aimed to validate DL models for individualized prognostication after HCC resection using CT as an imaging biomarker. This study included 1733 patients undergoing hepatic resection for solitary HCC. Participants were classified into training, validation, and test datasets. DL predictive models were developed using clinical variables and CT imaging to predict recurrence within 2 and 5 years and overall survival (OS) of > 5 and > 10 years postoperatively. The Youden index was used to identify cutoff values, and permutation importance was used to quantify the importance of each explanatory variable. In the test dataset, the DL models for recurrence within 2 and 5 years and OS of > 5 and > 10 years postoperatively achieved areas under the curve of 0.70, 0.70, 0.80, and 0.80, respectively. Permutation importance analysis showed that CT imaging had the highest importance value. The postoperative recurrence rates within 2 and 5 years were 52.6% versus 18.5% (p < 0.001) and 78.9% versus 46.7% (p < 0.001), and overall mortality rates within 5 and 10 years postoperatively were 45.1% versus 9.2% (p < 0.001) and 87.1% versus 43.2% (p < 0.001), in the high-risk versus low-risk groups, respectively. Our DL models using CT as an imaging biomarker are useful for individualized prognostication and may help optimize treatment planning for patients with HCC.
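
The Youden-index cutoff selection the authors describe can be sketched with scikit-learn's ROC utilities; the labels and scores below are synthetic placeholders, not study data:

```python
# Pick the threshold maximizing Youden's J = sensitivity + specificity - 1.
import numpy as np
from sklearn.metrics import roc_curve

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, 200)                    # 0 = no recurrence, 1 = recurrence
y_score = y_true * 0.3 + rng.normal(0.5, 0.2, 200)  # toy model output with some signal

fpr, tpr, thresholds = roc_curve(y_true, y_score)
j = tpr - fpr                                       # Youden's J at each threshold
best = thresholds[np.argmax(j)]
print(f"Youden-optimal cutoff: {best:.3f} (J = {j.max():.3f})")
```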

Qian X, Shu Z, Souri S, Zhao Y, Yin Z, Farrell RF, Kim J, Wu J, Ryu S, Zhang T

PubMed · Oct 11, 2025
C-arm cone-beam computed tomography (CBCT) in-suite imaging is often used in a brachytherapy suite. However, due to the limited rotation angle of the C-arm gantry and the dimensions of the flat panel imager (FPI), CBCT images are often truncated and not suitable for treatment planning. In this simulation study, we present the design of a novel ultra-compact mobile dual-source CBCT (dCBCT) that can scan a large field of view with a half rotation of the system. Enabled by deep learning image reconstruction, it can perform ultra-short scans and stereoscopic imaging before and during high dose rate (HDR) treatments. The dCBCT comprises two x-ray sources and a flat panel imager mounted on a C-arm gantry. The dual-source configuration enables real-time stereoscopic imaging and avoids the data truncation problem of conventional C-arm CBCT. Simulation studies were performed to prove the ultra-short-scan concept of dCBCT. Deep Image Prior (DIP) image reconstruction, with and without an additional prior, was also developed to reduce the required scan angle. The simulation studies show that dCBCT can achieve a sufficient reconstruction field of view with a 180° rotation. DIP reconstruction reduces the scanning angle to 135° without sacrificing image quality. With the body profile as a constraint, an ultra-short scan with only 90° of system rotation can be achieved. Powered by deep learning-based limited-angle image reconstruction, dCBCT can scan the full body with a short scan, allowing rapid 3D and real-time planar imaging in the brachytherapy suite.
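
The Deep Image Prior idea used here can be illustrated with a toy fitting loop: an untrained CNN is optimized so that its output, pushed through a forward operator, matches the incomplete measurements. The random-mask operator below is a deliberately simple stand-in for the paper's limited-angle projector, which is not public; everything in this sketch is an illustrative assumption:

```python
import torch, torch.nn as nn

torch.manual_seed(0)
x_true = torch.rand(1, 1, 64, 64)              # toy "patient" image
mask = (torch.rand_like(x_true) < 0.3).float()
A = lambda x: x * mask                         # stand-in for a limited-angle projector
y = A(x_true)                                  # observed incomplete measurements

net = nn.Sequential(                           # small untrained CNN as the image prior
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
z = torch.rand(1, 1, 64, 64)                   # fixed random input
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(500):                        # early stopping acts as the regularizer
    opt.zero_grad()
    loss = ((A(net(z)) - y) ** 2).mean()       # data fidelity in measurement space
    loss.backward(); opt.step()

print(f"final data loss: {loss.item():.5f}")   # net(z) is the reconstructed image
```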

Zhuang J, Chen Y, Zheng W, Lian G, Zhu Y, Zhang H, Fan X, Fu F, Ye Q

PubMed · Oct 11, 2025
To investigate whether ultrasound-based radiomic features can be used to predict human epidermal growth factor receptor 2 (HER2) expression, this study retrospectively analyzed the pre-operative ultrasound data of 113 patients with urothelial carcinoma of the bladder, who were classified into training (n = 67) and test (n = 46) sets. Least absolute shrinkage and selection operator (LASSO) regression was applied to identify the most discriminative radiomic features for evaluating HER2 status, and seven radiomics-based machine learning models were developed. The discriminative performance of the models was evaluated using metrics including the area under the receiver operating characteristic curve (AUROC). A nomogram based on logistic regression was established to visualize the predictive model combining clinical and radiomic signatures. Ultimately, seven radiomic features for HER2 status prediction were identified, six of which were derived from the wavelet images. Shapley Additive exPlanations analysis revealed that wavelet_LHH_glcm_MCC had the highest weight in predicting HER2 expression. All of the radiomics-based prediction models achieved an area under the curve of more than 0.72 in the test set. The combined nomogram exhibited areas under the curve of 0.827 (95% CI: 0.723-0.931) in the training set and 0.784 (95% CI: 0.616-0.953) in the test set. Ultrasound-based radiomic features, especially the wavelet transform-based texture features, show potential for non-invasive HER2 status classification in urothelial carcinoma of the bladder.
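
The LASSO-then-classify workflow outlined above can be sketched as follows; the feature matrix and labels are synthetic stand-ins for the extracted ultrasound radiomic (e.g., wavelet-GLCM) features:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = rng.normal(size=(113, 100))    # 113 patients x 100 radiomic features (toy)
y = (X[:, 0] - X[:, 1] + rng.normal(0, 0.5, 113) > 0).astype(int)  # toy HER2 labels
X = StandardScaler().fit_transform(X)

lasso = LassoCV(cv=5).fit(X, y)             # sparse feature selection
selected = np.flatnonzero(lasso.coef_)      # indices of retained features
clf = LogisticRegression().fit(X[:, selected], y)
# In-sample AUROC only; a faithful workflow would select features inside CV folds.
auc = roc_auc_score(y, clf.predict_proba(X[:, selected])[:, 1])
print(f"{selected.size} features selected, in-sample AUROC = {auc:.3f}")
```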

Zhang H, Liu K, Ding Y, Li H, Liang J, Yu H, Yin K

PubMed · Oct 11, 2025
Accurate preoperative classification of pulmonary nodules (PNs) is critical for guiding clinical decision-making and preventing overtreatment. This study aims to evaluate the predictive performance of artificial intelligence (AI)-based quantitative computed tomography (CT) feature analysis in differentiating among four pathological types of PNs: atypical adenomatous hyperplasia and adenocarcinoma in situ (AAH + AIS), minimally invasive adenocarcinoma (MIA), invasive adenocarcinoma (IAC), and lung inflammatory nodules (IN). A total of 462 pathologically confirmed PNs were analyzed. Radiomic features, including CT attenuation metrics, 3D morphometrics, and texture parameters such as entropy and skewness, were extracted using a deep learning-based AI platform. Logistic regression models were constructed using both single- and multi-variable strategies to evaluate the classification accuracy of these features. A combined model integrating AI-derived features with traditional CT signs was used to assess the diagnostic performance of the radiomic features in differentiating the four pathological types of nodules. The combined model demonstrated superior diagnostic performance, with area under the curve (AUC) values of 0.936 for IAC, 0.884 for AAH + AIS, and 0.865 for IN. Although MIA showed lower classification accuracy (AUC = 0.707), key features such as entropy, solid component ratio, and total volume effectively distinguished invasive from non-invasive lesions. Moreover, the inclusion of IN as a separate category significantly enhanced the clinical utility of AI in differentiating benign mimickers from malignant nodules. These findings highlight the potential of AI-enhanced radiomics for supporting non-invasive, objective, and individualized diagnosis of PNs.

Question: Can artificial intelligence (AI)-based quantitative CT analysis reliably differentiate benign inflammatory nodules from the spectrum of lung adenocarcinoma subtypes, a common diagnostic challenge?

Findings: An integrated model combining AI-driven radiomic features and traditional CT signs demonstrated high accuracy in differentiating invasive adenocarcinoma (AUC = 0.936), pre-invasive lesions (AUC = 0.884), and inflammatory nodules (AUC = 0.865).

Clinical relevance: AI-enhanced radiomics provides a non-invasive, objective tool to improve preoperative risk stratification of pulmonary nodules, potentially guiding personalized management and reducing unnecessary surgeries for benign inflammatory lesions that mimic malignancy.
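
The per-class AUC evaluation implied by the four-way classification is typically computed one-vs-rest; a hedged sketch with placeholder data (real features and labels would come from the AI platform and pathology):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
X = rng.normal(size=(462, 20))       # 462 nodules x 20 quantitative features (toy)
y = rng.integers(0, 4, 462)          # 0=AAH+AIS, 1=MIA, 2=IAC, 3=IN (toy labels)
probs = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)
for k, name in enumerate(["AAH+AIS", "MIA", "IAC", "IN"]):
    # One-vs-rest AUC: class k against all other classes.
    auc = roc_auc_score((y == k).astype(int), probs[:, k])
    print(f"{name}: one-vs-rest AUC = {auc:.3f}")
```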

Chaudhary R, Chaudhary P, Singh C, Kumar K, Singh S, Arora R, Kaur S, Vaarshney D, Acharya P, Mishra U

PubMed · Oct 11, 2025
This study investigates the efficacy of advanced deep learning techniques, specifically a convolutional neural network (CNN) (U-Net) and a single-shot multibox detector (SSD), in enhancing the early detection of brain tumors, thereby facilitating timely medical intervention. Accurate brain tumor detection is paramount in medical image analysis, as it involves the precise identification and localization of abnormal growths within the brain. Conventional diagnostic approaches often rely on manual analysis conducted by radiologists, which is susceptible to human error and influenced by variability in tumor size, shape, and location. In our research, we leverage U-Net, a CNN widely recognized for its effectiveness in medical image segmentation, alongside SSD, an established object detection algorithm. The results indicate that the U-Net model achieved an impressive accuracy of 97.73%, demonstrating a high level of effectiveness in segmenting brain tumors with exceptional precision. Conversely, the SSD model achieved an accuracy of 58%, which, while comparatively lower, suggests that it may still serve as a valuable supplementary tool in specific scenarios and for broader applications in identifying tumor regions within medical scans. Our findings illuminate the potential of U-Net for high-precision brain tumor detection, reinforcing its position as a leading method in medical imaging. Overall, the study reinforces the important role of deep learning methods in improving early detection outcomes in neuro-oncology and highlights avenues for further exploration in enhancing diagnostic accuracy.
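
A compact sketch of the encoder-decoder-with-skip design that U-Net embodies (one downsampling level only; a real model for this task would be deeper and trained on annotated MRI):

```python
import torch, torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)                # encoder stage
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)               # bottleneck
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)               # 32 = 16 (skip) + 16 (upsampled)
        self.head = nn.Conv2d(16, 1, 1)        # per-pixel tumor logit

    def forward(self, x):
        e = self.enc(x)                        # skip-connection source
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.head(d)

logits = TinyUNet()(torch.rand(1, 1, 128, 128))
print(logits.shape)                            # torch.Size([1, 1, 128, 128])
```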

Peters B, Symons R, Oulkadi S, Van Breda A, Bataille Y, Kayaert P, Dewilde W, De Wilder K, Franssen WMA, Nchimi A, Ghekiere O

PubMed · Oct 11, 2025
Fractional flow reserve (FFR) and instantaneous wave-Free Ratio (iFR) pressure measurements during invasive coronary angiography (ICA) are the gold standard for assessing vessel-specific ischemia. Artificial intelligence has emerged to compute FFR from coronary computed tomography angiography (CCTA) images (CT-FFR-AI). We assessed a CT-FFR-AI deep learning model for the prediction of vessel-specific ischemia compared to invasive FFR/iFR measurements. We retrospectively selected 322 vessels from 275 patients at two centers who underwent CCTA and invasive FFR and/or iFR measurements during ICA within three months. A junior and a senior radiologist at each center supervised vessel centerline building to generate curvilinear reformats, which were processed for CT-FFR-AI binary outcome (≤ 0.80 or > 0.80) prediction. Reliability of CT-FFR-AI outcomes based on the radiologists' supervision was assessed with Cohen's κ. Diagnostic values of CT-FFR-AI were calculated using invasive FFR ≤ 0.80 (n = 224) and invasive iFR ≤ 0.89 (n = 238) as the gold standards. A multinomial logistic regression model, including all false-positive and false-negative cases, assessed the impact of patient- and CCTA-related factors on the diagnostic values of CT-FFR-AI. Concordance for CT-FFR-AI binary outcomes was substantial (κ = 0.725, p < 0.001). Sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy of CT-FFR-AI in predicting vessel-specific ischemia on a per-vessel analysis, based on the senior radiologists' evaluations, were 85% (58/68) and 91% (78/86), 82% (128/156) and 78% (119/152), 67% (58/86) and 70% (78/111), 93% (128/138) and 94% (119/127), and 83% (186/224) and 83% (197/238) against the FFR and iFR standards, respectively. Coronary calcifications significantly reduced the diagnostic accuracy of CT-FFR-AI (p < 0.001; OR, 1.002; 95% CI 1.001-1.003). CT-FFR-AI demonstrates high diagnostic performance in predicting vessel-specific coronary ischemia compared to invasive FFR and iFR. Coronary calcifications negatively affect specificity, suggesting that further improvements in spatial resolution could enhance accuracy.

Question: How accurately can a new deep learning model (CT-FFR-AI) assess vessel-specific ischemia from CCTA non-invasively compared to two validated pressure measurements during invasive coronary angiography?

Findings: CT-FFR-AI achieved high diagnostic accuracy in predicting vessel-specific ischemia, with high sensitivity and negative predictive value, independent of scanner type and radiologists' experience.

Clinical relevance: CT-FFR-AI provides a non-invasive alternative to fractional flow reserve and instantaneous wave-Free Ratio measurements during invasive coronary angiography for detecting vessel-specific ischemia, potentially reducing the need for invasive procedures, lowering healthcare costs, and improving patient safety.
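
The per-vessel diagnostic values quoted for the FFR reference standard can be reproduced directly from the confusion-matrix counts embedded in the abstract (TP = 58, FN = 10, FP = 28, TN = 128); a small sketch:

```python
tp, fn, fp, tn = 58, 10, 28, 128               # counts from the abstract (FFR standard)
sens = tp / (tp + fn)                          # 58/68   -> 0.85
spec = tn / (tn + fp)                          # 128/156 -> 0.82
ppv  = tp / (tp + fp)                          # 58/86   -> 0.67
npv  = tn / (tn + fn)                          # 128/138 -> 0.93
acc  = (tp + tn) / (tp + fn + fp + tn)         # 186/224 -> 0.83
print(f"sens {sens:.2f}, spec {spec:.2f}, PPV {ppv:.2f}, NPV {npv:.2f}, acc {acc:.2f}")
```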

Mohamed Hamad, Muhammad Khan, Tamer Khattab, Mohamed Mabrok

arXiv preprint · Oct 11, 2025
A key challenge in ischemic stroke diagnosis using medical imaging is the accurate localization of the occluded vessel. Current machine learning methods focus primarily on lesion segmentation, with limited work on vessel localization. In this study, we introduce Stroke Locus Net, an end-to-end deep learning pipeline for detection, segmentation, and occluded vessel localization using only MRI scans. The proposed system combines a segmentation branch using nnUNet for lesion detection with an arterial atlas for vessel mapping and identification, and a generation branch using pGAN to synthesize MRA images from MRI. Our implementation demonstrates promising results in localizing occluded vessels on stroke-affected T1 MRI scans, with potential for faster and more informed stroke diagnosis.
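
The atlas-lookup step the pipeline implies can be sketched as a simple overlap vote: given a lesion mask and an arterial-territory atlas in the same space, report the territory with the largest overlap. The territory codes and pre-aligned volumes below are illustrative assumptions, not the authors' code:

```python
import numpy as np

# Toy arterial-territory atlas: 0 = background, 1 = MCA, 2 = ACA, 3 = PCA.
atlas = np.zeros((64, 64, 64), np.int8)
atlas[:, :32], atlas[:, 32:48], atlas[:, 48:] = 1, 2, 3
# Toy lesion mask (in a real pipeline, the nnUNet segmentation registered to atlas space).
lesion = np.zeros(atlas.shape, bool); lesion[30:40, 28:40, 30:40] = True

territories = {1: "MCA", 2: "ACA", 3: "PCA"}
overlap = {name: np.logical_and(lesion, atlas == code).sum()
           for code, name in territories.items()}
print(max(overlap, key=overlap.get), overlap)  # territory with max lesion overlap
```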

Jack Krolik, Jake Lynn, John Henry Rudden, Dmytro Vremenko

arXiv preprint · Oct 11, 2025
This study explores the application of deep learning techniques in the automated detection and segmentation of brain tumors from MRI scans. We employ several machine learning models, including basic logistic regression, Convolutional Neural Networks (CNNs), and Residual Networks (ResNet) to classify brain tumors effectively. Additionally, we investigate the use of U-Net for semantic segmentation and EfficientDet for anchor-based object detection to enhance the localization and identification of tumors. Our results demonstrate promising improvements in the accuracy and efficiency of brain tumor diagnostics, underscoring the potential of deep learning in medical imaging and its significance in improving clinical outcomes.
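
One common way to instantiate the ResNet classification route mentioned above is to swap the final layer of a torchvision backbone; a minimal sketch (weight choice, input preprocessing, and class count are illustrative, not the authors' configuration):

```python
import torch, torch.nn as nn
from torchvision.models import resnet18

model = resnet18(weights=None)                 # or ImageNet-pretrained weights
model.fc = nn.Linear(model.fc.in_features, 2)  # 2 classes: tumor / no tumor
logits = model(torch.rand(4, 3, 224, 224))     # batch of 4 RGB-converted MRI slices
print(logits.shape)                            # torch.Size([4, 2])
```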

Yuxiang Lai, Jike Zhong, Ming Li, Yuheng Li, Xiaofeng Yang

arXiv preprint · Oct 11, 2025
Recent advances in large generative models have shown that simple autoregressive formulations, when scaled appropriately, can exhibit strong zero-shot generalization across domains. Motivated by this trend, we investigate whether autoregressive video modeling principles can be directly applied to medical imaging tasks, despite the model never being trained on medical data. Specifically, we evaluate a large vision model (LVM) in a zero-shot setting across four representative tasks: organ segmentation, denoising, super-resolution, and motion prediction. Remarkably, even without domain-specific fine-tuning, the LVM can delineate anatomical structures in CT scans and achieve competitive performance on segmentation, denoising, and super-resolution. Most notably, in radiotherapy motion prediction, the model forecasts future 3D CT phases directly from prior phases of a 4D CT scan, producing anatomically consistent predictions that capture patient-specific respiratory dynamics with realistic temporal coherence. We evaluate the LVM on 4D CT data from 122 patients, totaling over 1,820 3D CT volumes. Despite no prior exposure to medical data, the model achieves strong performance across all tasks and surpasses specialized DVF-based and generative baselines in motion prediction, achieving state-of-the-art spatial accuracy. These findings reveal the emergence of zero-shot capabilities in medical video modeling and highlight the potential of general-purpose video models to serve as unified learners and reasoners, laying the groundwork for future medical foundation models built on video models.
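
The autoregressive framing evaluated here, predicting the next 3D CT phase from preceding phases and rolling the prediction forward, can be shown schematically; the ConvNet below is a toy stand-in for the large vision model, and all shapes are illustrative:

```python
import torch, torch.nn as nn

phases = torch.rand(10, 1, 32, 64, 64)             # ten 3D phases of one 4D CT (toy)
k = 3                                              # context length in phases
model = nn.Conv3d(k, 1, kernel_size=3, padding=1)  # maps k phases -> next phase (toy)

context = phases[:k].squeeze(1).unsqueeze(0)       # (1, k, 32, 64, 64)
predicted = []
for _ in range(phases.shape[0] - k):               # autoregressive rollout
    nxt = model(context)                           # (1, 1, 32, 64, 64)
    predicted.append(nxt)
    context = torch.cat([context[:, 1:], nxt], dim=1)  # slide the context window
print(len(predicted), predicted[0].shape)          # 7 torch.Size([1, 1, 32, 64, 64])
```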