Zaww K, Abbas H, Vanegas Sáenz JR, Hong G

PubMed · Sep 19, 2025
This systematic review evaluates the effectiveness of artificial intelligence (AI) models in dental implant treatment planning, focusing on: 1) identification, detection, and segmentation of anatomical structures; 2) technical assistance during treatment planning; and 3) additional relevant applications. A literature search of PubMed/MEDLINE, Scopus, and Web of Science was conducted for studies published in English until July 31, 2024. The included studies explored AI applications in implant treatment planning, excluding expert opinions, guidelines, and protocols. Three reviewers independently assessed study quality using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies, resolving disagreements by consensus. Of the 28 included studies, four were high, four were medium, and 20 were low quality according to the JBI scale. Eighteen studies on anatomical segmentation demonstrated AI models with accuracy rates ranging from 66.4% to 99.1%. Eight studies examined AI's role in technical assistance for surgical planning, demonstrating its potential in predicting jawbone mineral density, optimizing drilling protocols, and classifying plans for maxillary sinus augmentation. One study indicated a learning curve for AI in implant planning, recommending at least 50 images to exceed 70% predictive accuracy. Another study reported 83% accuracy in localizing stent markers for implant sites, suggesting additional imaging planes to address a 17% miss rate and a 2.8% false-positive rate. AI models show potential for automating dental implant planning, with high accuracy in anatomical segmentation and promising technical assistance. However, further well-designed studies with standardized evaluation parameters are required before pragmatic integration into clinical settings.

Larsen K, Zhao C, He Z, Keyak J, Sha Q, Paez D, Zhang X, Hung GU, Zou J, Peix A, Zhou W

PubMed · Sep 19, 2025
Current machine learning (ML) models usually attempt to utilize all available patient data to predict patient outcomes, ignoring the associated cost and time of data acquisition. The purpose of this study was to create a multi-stage ML model to predict cardiac resynchronization therapy (CRT) response for heart failure (HF) patients. This model exploits uncertainty quantification to recommend additional collection of single-photon emission computed tomography myocardial perfusion imaging (SPECT MPI) variables when baseline clinical variables and electrocardiogram (ECG) features are not sufficient. Two hundred eighteen patients who underwent rest-gated SPECT MPI were enrolled in this study. CRT response was defined as an increase in left ventricular ejection fraction (LVEF) > 5% at a 6 ± 1 month follow-up. A multi-stage ML model was created by combining two ensemble models: Ensemble 1 was trained with clinical variables and ECG features; Ensemble 2 included Ensemble 1 plus SPECT MPI features. Uncertainty quantification from Ensemble 1 allowed for multi-stage decision-making to determine whether the acquisition of SPECT data for a patient is necessary. The performance of the multi-stage model was compared with that of Ensembles 1 and 2. The response rate for CRT was 55.5% (n = 121); 61.0% of patients were male (n = 133), with a mean age of 62.0 ± 11.8 years and LVEF of 27.7 ± 11.0%. The multi-stage model performed similarly to Ensemble 2 (which utilized the additional SPECT data), with an AUC of 0.75 vs. 0.77, accuracy of 0.71 vs. 0.69, sensitivity of 0.70 vs. 0.72, and specificity of 0.72 vs. 0.65, respectively. However, the multi-stage model required SPECT MPI data for only 52.7% of the patients across all folds. By using rule-based logic stemming from uncertainty quantification, the multi-stage model was able to reduce the need for additional SPECT MPI data acquisition without significantly sacrificing performance.
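
As a rough illustration of the uncertainty-gated, two-stage design described above, the sketch below trains two ensembles and consults the SPECT-augmented one only when the first ensemble's members disagree. The ensemble type, the between-member standard deviation as the uncertainty measure, and the threshold `tau` are assumptions for illustration, not the authors' implementation.

```python
# Sketch of an uncertainty-gated two-stage classifier (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def fit_ensemble(X, y, n_members=10, seed=0):
    """Train a bag of classifiers on bootstrap resamples of (X, y)."""
    rng = np.random.default_rng(seed)
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(X), len(X))
        clf = RandomForestClassifier(n_estimators=100,
                                     random_state=int(rng.integers(1_000_000)))
        clf.fit(X[idx], y[idx])
        members.append(clf)
    return members

def predict_with_uncertainty(members, X):
    """Mean response probability and between-member std as an uncertainty proxy."""
    probs = np.stack([m.predict_proba(X)[:, 1] for m in members])
    return probs.mean(axis=0), probs.std(axis=0)

def multi_stage_predict(ens1, ens2, X_clinical_ecg, X_with_spect, tau=0.15):
    """Use Ensemble 1 alone when it is confident; otherwise fall back to
    Ensemble 2, which also sees the SPECT MPI features."""
    p1, u1 = predict_with_uncertainty(ens1, X_clinical_ecg)
    needs_spect = u1 > tau          # acquire SPECT only for uncertain cases
    p = p1.copy()
    if needs_spect.any():
        p2, _ = predict_with_uncertainty(ens2, X_with_spect[needs_spect])
        p[needs_spect] = p2
    return p, needs_spect
```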

Gorenshtein A, Liba T, Goren A

PubMed · Sep 19, 2025
Gliomas, pituitary tumors, and meningiomas constitute the major types of primary brain tumors. The challenge in achieving a definitive diagnosis stems from the brain's complex structure, limited accessibility for precise imaging, and the resemblance between different types of tumors. An alternative and promising solution is the application of artificial intelligence (AI), specifically through deep learning models. We developed multiple lightweight deep learning models: ResNet-18 (both pretrained on ImageNet and trained from scratch), ResNet-34, ResNet-50, and a custom CNN, to classify glioma, meningioma, pituitary tumor, and no-tumor MRI scans. A dataset of 7023 images was employed, split into 5712 for training and 1311 for validation. Each model was evaluated via accuracy, area under the curve (AUC), sensitivity, specificity, and confusion matrices. We compared our models with state-of-the-art (SOTA) methods such as SAlexNet and TumorGANet, highlighting computational efficiency and classification performance. The pretrained ResNet models achieved 98.5-99.2% accuracy and near-perfect validation metrics, with an overall AUC of 1.0 and average sensitivity and specificity both exceeding 97% across the four classes. In comparison, ResNet-18 trained from scratch and the custom CNN achieved 91.99% and 87.03% accuracy, respectively, with AUCs ranging from 0.94 to 1.00. Error analysis revealed moderate misclassification of meningiomas as gliomas in non-pretrained models. Learning-rate optimization facilitated stable convergence, and loss metrics indicated effective generalization with minimal overfitting. Our findings confirm that a moderately sized, transfer-learned network (ResNet-18) can deliver high diagnostic accuracy and robust performance for four-class brain tumor classification. This approach aligns with the goal of providing efficient, accurate, and easily deployable AI solutions, particularly for smaller clinical centers with limited computational resources. Future studies should incorporate multi-sequence MRI and extended patient cohorts to further validate these promising results.
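
A minimal sketch of the transfer-learning recipe the study evaluates: an ImageNet-pretrained ResNet-18 with its final layer replaced for four classes. Dataset paths, preprocessing, and hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Transfer learning for 4-class brain tumor MRI classification (illustrative).
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets

device = "cuda" if torch.cuda.is_available() else "cpu"

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)  # glioma, meningioma, pituitary, no tumor
model = model.to(device)

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # MRI slices -> 3-channel input
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Assumes class-per-folder layout; "data/train" is a hypothetical path.
train_ds = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for epoch in range(10):
    for x, y in loader:
        opt.zero_grad()
        loss = loss_fn(model(x.to(device)), y.to(device))
        loss.backward()
        opt.step()
```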

Häntze H, Xu L, Rattunde MN, Donle L, Dorfner FJ, Hering A, Nawabi J, Adams LC, Bressem KK

PubMed · Sep 19, 2025
Annotating new classes in MRI images is time-consuming; refining presegmented structures can accelerate this process. Many target classes lacking MRI segmentation models are supported by computed tomography (CT) models, but translating MRI to synthetic CT images is challenging. We demonstrate that CT segmentation models can create accurate MRI presegmentations, with or without image inversion. We retrospectively investigated the performance of two CT-trained models on MRI images: a general multiclass model (TotalSegmentator) and a specialized renal tumor model trained in-house. Both models were applied to 100 T1-weighted (T1w) and 100 T2-weighted fat-saturated (T2wfs) MRI sequences from 100 patients (50 male). Segmentation quality was evaluated on both raw and intensity-inverted sequences using Dice similarity coefficients (DSC), with reference annotations comprising manual kidney tumor annotations and automatically generated segmentations for 24 abdominal structures. Segmentation quality varied by MRI sequence and anatomical structure. Both models accurately segmented kidneys in T2wfs sequences without preprocessing (TotalSegmentator DSC 0.60), but TotalSegmentator failed to segment blood vessels and muscles. In T1w sequences, intensity inversion significantly improved TotalSegmentator performance, increasing the mean DSC across 24 structures from 0.04 to 0.56 (p < 0.001). Kidney tumor segmentation performed poorly in T2wfs sequences regardless of preprocessing; in T1w sequences, inversion improved tumor segmentation DSC from 0.04 to 0.42 (p < 0.001). CT-trained models can generalize to MRI when supported by image augmentation: inversion preprocessing enabled segmentation of renal cell carcinoma in T1w MRI using a CT-trained model, suggesting CT models may be transferable to the MRI domain. CT-trained artificial intelligence models can thus be adapted for MRI segmentation using simple preprocessing, potentially reducing manual annotation effort and accelerating the development of AI-assisted tools for MRI analysis in research and future clinical practice. CT segmentation models can create presegmentations for many structures in MRI scans; T1w MRI scans require an additional inversion step before segmentation with a CT model. Results were consistent for a large multiclass model (TotalSegmentator) and a smaller model for renal cell carcinoma.
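
The inversion preprocessing could look like the sketch below: percentile-clip a T1w volume, flip its intensity scale, and hand the result to a CT-trained model. The exact inversion function used in the study is not specified here; max-minus-value after clipping is one simple choice, and the TotalSegmentator call is shown only as an assumed usage of its Python API.

```python
# Intensity inversion of a T1w MRI volume before CT-model segmentation (sketch).
import nibabel as nib
import numpy as np

img = nib.load("t1w.nii.gz")          # hypothetical input path
data = img.get_fdata()

# Clip outliers for stability, then invert the intensity scale so that
# bright soft tissue becomes dark, more closely resembling CT contrast.
lo, hi = np.percentile(data, [1, 99])
clipped = np.clip(data, lo, hi)
inverted = hi - clipped + lo

nib.save(nib.Nifti1Image(inverted.astype(np.float32), img.affine),
         "t1w_inverted.nii.gz")

# The inverted volume can then be passed to a CT-trained model, e.g. via the
# TotalSegmentator Python API (assuming the package is installed):
# from totalsegmentator.python_api import totalsegmentator
# totalsegmentator("t1w_inverted.nii.gz", "segmentations/")
```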

Olesen ASO, Miger K, Ørting SN, Petersen J, de Bruijne M, Boesen MP, Andersen MB, Grand J, Thune JJ, Nielsen OW

PubMed · Sep 19, 2025
Dyspnea is a common cause of hospitalization, posing diagnostic challenges among older adult patients with multimorbid conditions. Chest computed tomography (CT) scans are increasingly used in patients with dyspnea and offer superior diagnostic accuracy over chest radiographs, but their use is limited by a shortage of radiologists. This study aims to develop and validate artificial intelligence (AI) algorithms that automatically analyze acute CT scans and provide immediate feedback on the likelihood of pneumonia, pulmonary embolism, and cardiac decompensation; this protocol focuses on cardiac decompensation. We designed a retrospective method development and validation study, approved by the Danish National Committee on Health Research Ethics (1575037). We extracted 4672 acute chest CT scans with corresponding radiological reports from the Copenhagen University Hospital-Bispebjerg and Frederiksberg, Denmark, from 2016 to 2021. The scans will be randomly split into training (2/3) and internal validation (1/3) sets. Development of the AI algorithm involves parameter tuning and feature selection using cross-validation. Internal validation uses radiological reports as the ground truth, with algorithm-specific thresholds based on true positive and true negative rates of 90% or greater for heart and lung diseases. The AI models will be validated on low-dose chest CT scans from consecutive patients admitted with acute dyspnea and on coronary CT angiography scans from patients with acute coronary syndrome. As of August 2025, CT data extraction has been completed. Algorithm development, including image segmentation and natural language processing, is ongoing; for pulmonary congestion, it has been completed. Internal and external validation are planned, with overall validation expected to conclude in 2025 and final results available in 2026. The results are expected to enhance clinical decision-making by providing immediate, AI-driven insights from CT scans, benefiting both clinicians and patients. DERR1-10.2196/77030.
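
For the validation step, thresholds matching the stated criterion (true positive and true negative rates of 90% or greater) might be chosen from an ROC curve as sketched below; the variable names and the rule-in/rule-out framing are illustrative assumptions, not the protocol's implementation.

```python
# Illustrative threshold selection for TPR >= 0.90 and TNR >= 0.90.
import numpy as np
from sklearn.metrics import roc_curve

def pick_thresholds(y_true, scores, target=0.90):
    """Return a 'rule-in' threshold with TPR >= target and a 'rule-out'
    threshold with TNR >= target, if such operating points exist."""
    fpr, tpr, thr = roc_curve(y_true, scores)
    tnr = 1 - fpr
    rule_in = thr[tpr >= target]   # thresholds still catching >=90% positives
    rule_out = thr[tnr >= target]  # thresholds still clearing >=90% negatives
    t_in = rule_in.max() if rule_in.size else None   # most specific such point
    t_out = rule_out.min() if rule_out.size else None  # most sensitive such point
    return t_in, t_out
```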

Wang Y, Liang N, Ren J, Zhang X, Shen Y, Cai A, Zheng Z, Li L, Yan B

PubMed · Sep 19, 2025
Spectral computed tomography (CT) is a critical tool in clinical practice, offering capabilities in multi-energy spectrum imaging and material identification. The limited-angle (LA) scanning strategy has attracted attention for its advantages in fast data acquisition and reduced radiation exposure, in line with the as-low-as-reasonably-achievable principle. However, most deep learning-based methods require a separate model for each LA setting, which limits their flexibility in adapting to new conditions. In this study, we developed a novel Visual-Language model-assisted Spectral CT Reconstruction (VLSR) method to address LA artifacts and enable multi-setting adaptation within a single model. The VLSR method integrates the image-text perception ability of visual-language models with the image-generation potential of diffusion models. Prompt engineering is introduced to better represent LA artifact characteristics, further improving artifact-removal accuracy. Additionally, a collaborative sampling framework combining data consistency, low-rank regularization, and image-domain diffusion models is developed to produce high-quality, consistent spectral CT reconstructions. VLSR outperforms the comparison methods: at scanning angles of 90° and 60° on simulated data, it improves peak signal-to-noise ratio by at least 0.41 dB and 1.13 dB, respectively. The VLSR method can reconstruct high-quality spectral CT images under diverse LA configurations, allowing faster and more flexible scans with reduced radiation dose.
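
One plausible shape for a single iteration of such collaborative sampling is sketched below: a diffusion-prior denoising step, per-channel data consistency against the limited-angle sinograms, and a low-rank constraint across energy bins via truncated SVD. All operators, step sizes, and the rank are stand-ins, not the published VLSR algorithm.

```python
# Schematic collaborative sampling step: prior + data consistency + low rank.
import numpy as np

def low_rank_shrink(x, rank):
    """Keep the top singular components across energy channels.
    x: (n_energy, n_pixels) matrix of flattened channel images."""
    U, s, Vt = np.linalg.svd(x, full_matrices=False)
    s[rank:] = 0.0
    return (U * s) @ Vt

def collaborative_step(x, A, At, y, denoise, step=0.5, rank=2):
    """x: (n_energy, n_pixels) current estimate; A/At: callable forward and
    back projection for the limited-angle geometry; y: measured sinograms;
    denoise: callable standing in for the image-domain diffusion prior."""
    x = denoise(x)                          # diffusion-model prior step
    for c in range(x.shape[0]):             # per-channel data consistency
        x[c] -= step * At(A(x[c]) - y[c])
    return low_rank_shrink(x, rank)         # spectral low-rank constraint
```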

Yang H, George Y, Mehta D, Lin L, Chen C, Yang D, Sun J, Lau KF, Bain C, Yang Q, Parsons MW, Ge Z

PubMed · Sep 19, 2025
Predicting the final location and volume of lesions in acute ischemic stroke (AIS) is crucial for clinical management. While CT perfusion (CTP) imaging is routinely used for estimating lesion outcomes, conventional threshold-based methods have limitations. We developed specialized outcome prediction deep learning models that predict the infarct core in successful reperfusion cases and the combined core-penumbra region in unsuccessful reperfusion cases. We developed single-modal and multi-modal deep learning models using CTP parameter maps to predict the final infarct lesion on follow-up diffusion-weighted imaging (DWI). Using a multi-center dataset, deep learning models were developed and evaluated separately for patients with complete recanalization (CR, successful reperfusion, n=350) and no recanalization (NR, unsuccessful reperfusion, n=138) after treatment. The CR model was designed to predict the infarct core region, while the NR model predicted the expanded hypoperfused tissue encompassing both core and penumbra regions. Five-fold cross-validation was performed for robust evaluation. The multi-modal 3D nnU-Net model demonstrated superior performance, achieving mean Dice scores of 35.36% in CR patients and 50.22% in NR patients. This significantly outperformed the currently used clinical method, the conventional single-modality threshold-based measure, which yielded Dice scores of 15.73% and 39.71% for the CR and NR groups, respectively. Our approach offers outcome estimates for both successful and unsuccessful reperfusion, enabling clinicians to better evaluate treatment eligibility for reperfusion therapies and assess potential treatment benefits. This facilitates more personalized treatment recommendations and has the potential to significantly enhance clinical decision-making in AIS management by providing more accurate tissue outcome predictions than conventional single-modality threshold-based approaches. AIS=acute ischemic stroke; CR=complete recanalization; NR=no recanalization; DT=delay time; IQR=interquartile range; GT=ground truth; HD95=95% Hausdorff distance; ASSD=average symmetric surface distance; MLV=mismatch lesion volume.
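
For reference, the Dice similarity coefficient used here to score predicted lesions against the DWI-derived ground truth is the standard overlap measure; a minimal implementation on binary masks:

```python
# Dice = 2*|P intersect G| / (|P| + |G|), on boolean lesion masks.
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-7) -> float:
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return float((2.0 * inter + eps) / (pred.sum() + gt.sum() + eps))
```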

Memon Y, Zeng F

PubMed · Sep 19, 2025
Twin-to-Twin Transfusion Syndrome (TTTS) is a complex prenatal condition in which monochorionic twins experience an imbalance in blood flow due to abnormal vascular connections in the shared placenta. Fetoscopic Laser Photocoagulation (FLP) is the first-line treatment for TTTS, aimed at coagulating these abnormal connections. However, the procedure is complicated by a limited field of view, occlusions, poor-quality endoscopic images, and distortions caused by artifacts. To optimize the visualization of placental vessels during surgical procedures, we propose Hybrid-MedNet, a novel hybrid CNN-transformer network that incorporates multi-dimensional deep feature learning techniques. The network introduces a BiPath Tokenization module that enhances vessel boundary detection by capturing both channel dependencies and spatial features through parallel attention mechanisms. A Context-Aware Transformer block addresses the weak inductive bias problem in traditional transformers while preserving spatial relationships crucial for accurate vessel identification in distorted fetoscopic images. Furthermore, we develop a Multi-Scale Trifusion Module that integrates multi-dimensional features to capture rich vascular representations from the encoder and facilitate precise vessel information transfer to the decoder for improved segmentation accuracy. Experimental results show that our approach achieves a Dice score of 95.40% on fetoscopic images, outperforming 10 state-of-the-art segmentation methods. The consistent superior performance across four segmentation tasks and ten distinct datasets confirms the robustness and effectiveness of our method for diverse and complex medical imaging applications.
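
The parallel channel and spatial attention that BiPath Tokenization builds on can be sketched in a few lines of PyTorch; the layer sizes, pooling choices, and additive fusion below are generic assumptions, not the Hybrid-MedNet design.

```python
# Generic parallel channel + spatial attention module (illustrative).
import torch
import torch.nn as nn

class ParallelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.channel_mlp = nn.Sequential(   # channel path: global pool -> MLP gate
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        self.spatial_conv = nn.Sequential(  # spatial path: pooled maps -> conv gate
            nn.Conv2d(2, 1, kernel_size=7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        ch = x * self.channel_mlp(x)        # reweight channels
        pooled = torch.cat([x.mean(1, keepdim=True),
                            x.amax(1, keepdim=True)], dim=1)
        sp = x * self.spatial_conv(pooled)  # reweight spatial positions
        return ch + sp                      # fuse the two parallel paths

# e.g. ParallelAttention(64)(torch.randn(1, 64, 128, 128)).shape == (1, 64, 128, 128)
```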

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

PubMed · Sep 19, 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), it is desirable to have patient image data with known pathology and precise annotation. Standard patient case collection and reference standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images. Approach: This framework used artificial intelligence (AI)-assisted semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy used for commercial beam-hardening correction in the PCD-CT system was estimated and used to calculate multi-energy forward projections of these lesion models at different energy thresholds. Lesion projections were subsequently added to patient projections from PCD-CT exams. The modified projections were reconstructed to form realistic lesion-present patient images, using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, the structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed-rank test. Main results: No statistically significant discrepancy (p > 0.05) in CT numbers was observed between original and re-inserted tissue- and contrast-media-mimicking rods and hepatic lesions (mean ± standard deviation): rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU. The original and inserted lesions showed similar morphological features at the original and re-inserted locations (SSIM 0.95 ± 0.02, mean ± standard deviation). Additionally, the corresponding radiomic features presented highly similar feature clusters with no statistically significant differences (p > 0.05). Significance: The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and of advanced reconstruction and post-processing algorithms with target pathological features.
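
A two-dimensional toy version of projection-domain lesion insertion conveys the core idea: forward-project the lesion model, add it to the patient sinogram, and reconstruct. The study operates on multi-energy PCD-CT projections with vendor reconstruction software; skimage's radon/iradon pair stands in for both here.

```python
# Toy 2-D projection-domain lesion insertion (illustrative, single energy).
import numpy as np
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 360, endpoint=False)

def insert_lesion(patient_img: np.ndarray, lesion_img: np.ndarray) -> np.ndarray:
    """patient_img: 2-D slice; lesion_img: lesion attenuation on the same
    grid, zero outside the lesion. Returns the lesion-present reconstruction."""
    sino_patient = radon(patient_img, theta=theta, circle=False)
    sino_lesion = radon(lesion_img, theta=theta, circle=False)  # lesion projections
    return iradon(sino_patient + sino_lesion, theta=theta,
                  circle=False, filter_name="ramp")
```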

Islam J, Furqon EN, Farady I, Alex JSR, Shih CT, Kuo CC, Lin CY

PubMed · Sep 19, 2025
Alzheimer's Disease (AD) diagnostic procedures employing Magnetic Resonance Imaging (MRI) analysis face considerable obstacles to reliability and accuracy, especially when deep learning models are deployed in clinical environments. Present deep learning methodologies for MRI-based AD detection frequently exhibit spatial dependencies and lack robust validation mechanisms. Existing validation techniques inadequately integrate anatomical knowledge and offer limited feature interpretability across imaging conditions. To address this fundamental gap, we introduce a reverse-validation paradigm that systematically repositions anatomical structures to test whether models recognize features based on anatomical characteristics rather than spatial memorization. We propose three methodologies: Feature Position Invariance (FPI) for validating anatomical features, biomarker location augmentation to enhance spatial learning, and High-Confidence Cohort (HCC) selection for reliably identifying training samples. The FPI methodology leverages the reverse-validation approach to substantiate model predictions through the reconstruction of anatomical features, supported by our extensive data augmentation strategy and a confidence-based sample selection technique. Applying this framework with YOLO and MobileNet architectures yielded significant advances in both binary and three-class AD classification, achieving state-of-the-art accuracy with improvements of 2-4% over baseline models. Additionally, our methodology generates interpretable insights through anatomy-aligned validation, establishing direct links between model decisions and neuropathological features. Our experiments show consistent performance across various anatomical presentations, indicating that the framework enhances both the reliability and interpretability of MRI-based AD diagnosis, thereby equipping medical professionals with a more robust diagnostic support system.
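
The reverse-validation idea can be illustrated with a simple repositioning test: move an anatomical ROI to a new location and check whether the model's prediction follows the structure rather than its position. The ROI handling and agreement criterion below are illustrative assumptions, not the FPI implementation.

```python
# Toy position-invariance check for a 2-D image classifier (illustrative).
import numpy as np

def reposition_roi(img: np.ndarray, box: tuple, shift: tuple, fill: float = 0.0):
    """box = (y0, y1, x0, x1); shift = (dy, dx), chosen so the shifted ROI
    stays inside the image. Returns a copy with the ROI moved."""
    y0, y1, x0, x1 = box
    dy, dx = shift
    patch = img[y0:y1, x0:x1].copy()
    out = img.copy()
    out[y0:y1, x0:x1] = fill                         # blank the original site
    out[y0 + dy:y1 + dy, x0 + dx:x1 + dx] = patch    # paste at the shifted site
    return out

def position_invariance_score(model, img, box, shifts):
    """Fraction of shifts for which the prediction matches the baseline;
    a low score suggests the model memorized position, not anatomy."""
    base = model(img)
    moved = [model(reposition_roi(img, box, s)) for s in shifts]
    return float(np.mean([m == base for m in moved]))
```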